Sep 9 00:15:36.980248 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:16:40 -00 2025
Sep 9 00:15:36.980270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:15:36.980281 kernel: BIOS-provided physical RAM map:
Sep 9 00:15:36.980288 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:15:36.980295 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:15:36.980301 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:15:36.980309 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:15:36.980316 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:15:36.980328 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:15:36.980335 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:15:36.980342 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Sep 9 00:15:36.980349 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:15:36.980355 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:15:36.980362 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:15:36.980373 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:15:36.980380 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:15:36.980390 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:15:36.980397 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:15:36.980404 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:15:36.980411 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:15:36.980418 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:15:36.980426 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:15:36.980433 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:15:36.980440 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:15:36.980447 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:15:36.980457 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:15:36.980464 kernel: NX (Execute Disable) protection: active
Sep 9 00:15:36.980471 kernel: APIC: Static calls initialized
Sep 9 00:15:36.980478 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Sep 9 00:15:36.980486 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Sep 9 00:15:36.980493 kernel: extended physical RAM map:
Sep 9 00:15:36.980500 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:15:36.980507 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:15:36.980514 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:15:36.980522 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:15:36.980529 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:15:36.980539 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Sep 9 00:15:36.980546 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Sep 9 00:15:36.980553 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Sep 9 00:15:36.980560 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Sep 9 00:15:36.980571 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Sep 9 00:15:36.980578 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Sep 9 00:15:36.980588 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Sep 9 00:15:36.980596 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Sep 9 00:15:36.980603 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Sep 9 00:15:36.980611 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Sep 9 00:15:36.980618 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Sep 9 00:15:36.980626 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:15:36.980633 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Sep 9 00:15:36.980641 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Sep 9 00:15:36.980651 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Sep 9 00:15:36.980661 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Sep 9 00:15:36.980670 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Sep 9 00:15:36.980678 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:15:36.980685 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Sep 9 00:15:36.980693 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 9 00:15:36.980700 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Sep 9 00:15:36.980708 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 9 00:15:36.980717 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:15:36.980725 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Sep 9 00:15:36.980732 kernel: random: crng init done
Sep 9 00:15:36.980742 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Sep 9 00:15:36.980749 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Sep 9 00:15:36.980761 kernel: secureboot: Secure boot disabled
Sep 9 00:15:36.980769 kernel: SMBIOS 2.8 present.
Sep 9 00:15:36.980776 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Sep 9 00:15:36.980784 kernel: DMI: Memory slots populated: 1/1
Sep 9 00:15:36.980791 kernel: Hypervisor detected: KVM
Sep 9 00:15:36.980799 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:15:36.980806 kernel: kvm-clock: using sched offset of 5130313623 cycles
Sep 9 00:15:36.980814 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:15:36.980822 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:15:36.980830 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:15:36.980840 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:15:36.981890 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Sep 9 00:15:36.981899 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 00:15:36.981907 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:15:36.981915 kernel: Using GB pages for direct mapping
Sep 9 00:15:36.981935 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:15:36.981943 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 00:15:36.981951 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:15:36.981979 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:15:36.981992 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:15:36.982000 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 00:15:36.982008 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:15:36.982015 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:15:36.982023 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:15:36.982031 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:15:36.982039 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 00:15:36.982046 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 00:15:36.982054 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 00:15:36.982064 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 00:15:36.982072 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 00:15:36.982080 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 00:15:36.982087 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 00:15:36.982095 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 00:15:36.982102 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 00:15:36.982110 kernel: No NUMA configuration found
Sep 9 00:15:36.982118 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Sep 9 00:15:36.982126 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Sep 9 00:15:36.982136 kernel: Zone ranges:
Sep 9 00:15:36.982143 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:15:36.982152 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Sep 9 00:15:36.982159 kernel: Normal empty
Sep 9 00:15:36.982167 kernel: Device empty
Sep 9 00:15:36.982175 kernel: Movable zone start for each node
Sep 9 00:15:36.982182 kernel: Early memory node ranges
Sep 9 00:15:36.982190 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 00:15:36.982197 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 00:15:36.982215 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 00:15:36.982225 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Sep 9 00:15:36.982233 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Sep 9 00:15:36.982241 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Sep 9 00:15:36.982249 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Sep 9 00:15:36.982257 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Sep 9 00:15:36.982264 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Sep 9 00:15:36.982274 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:15:36.982282 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 00:15:36.982299 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 00:15:36.982307 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:15:36.982315 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Sep 9 00:15:36.982323 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Sep 9 00:15:36.982333 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Sep 9 00:15:36.982341 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Sep 9 00:15:36.982349 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Sep 9 00:15:36.982357 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:15:36.982365 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:15:36.982376 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:15:36.982384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:15:36.982392 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:15:36.982400 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:15:36.982408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:15:36.982416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:15:36.982424 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:15:36.982432 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:15:36.982440 kernel: TSC deadline timer available
Sep 9 00:15:36.982450 kernel: CPU topo: Max. logical packages: 1
Sep 9 00:15:36.982458 kernel: CPU topo: Max. logical dies: 1
Sep 9 00:15:36.982466 kernel: CPU topo: Max. dies per package: 1
Sep 9 00:15:36.982474 kernel: CPU topo: Max. threads per core: 1
Sep 9 00:15:36.982482 kernel: CPU topo: Num. cores per package: 4
Sep 9 00:15:36.982490 kernel: CPU topo: Num. threads per package: 4
Sep 9 00:15:36.982497 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Sep 9 00:15:36.982505 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:15:36.982513 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:15:36.982524 kernel: kvm-guest: setup PV sched yield
Sep 9 00:15:36.982532 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Sep 9 00:15:36.982540 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:15:36.982548 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:15:36.982556 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:15:36.982564 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Sep 9 00:15:36.982572 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Sep 9 00:15:36.982580 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:15:36.982588 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:15:36.982598 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:15:36.982608 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a
Sep 9 00:15:36.982619 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:15:36.982627 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:15:36.982635 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:15:36.982643 kernel: Fallback order for Node 0: 0
Sep 9 00:15:36.982651 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Sep 9 00:15:36.982659 kernel: Policy zone: DMA32
Sep 9 00:15:36.982669 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:15:36.982677 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:15:36.982685 kernel: ftrace: allocating 40099 entries in 157 pages
Sep 9 00:15:36.982699 kernel: ftrace: allocated 157 pages with 5 groups
Sep 9 00:15:36.982707 kernel: Dynamic Preempt: voluntary
Sep 9 00:15:36.982715 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:15:36.982724 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:15:36.982732 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:15:36.982740 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:15:36.982748 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:15:36.982760 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:15:36.982768 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:15:36.982779 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:15:36.982787 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:15:36.982795 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:15:36.982803 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:15:36.982811 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:15:36.982820 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:15:36.982830 kernel: Console: colour dummy device 80x25
Sep 9 00:15:36.982838 kernel: printk: legacy console [ttyS0] enabled
Sep 9 00:15:36.982846 kernel: ACPI: Core revision 20240827
Sep 9 00:15:36.982854 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:15:36.982862 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:15:36.982870 kernel: x2apic enabled
Sep 9 00:15:36.982888 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:15:36.982897 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:15:36.982916 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:15:36.982937 kernel: kvm-guest: setup PV IPIs
Sep 9 00:15:36.982948 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:15:36.982957 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:15:36.982965 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:15:36.982973 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:15:36.982981 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:15:36.982989 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:15:36.982997 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:15:36.983005 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:15:36.983015 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:15:36.983029 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:15:36.983038 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:15:36.983046 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:15:36.983056 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:15:36.983068 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:15:36.983080 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:15:36.983089 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:15:36.983097 kernel: active return thunk: srso_return_thunk
Sep 9 00:15:36.983109 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:15:36.983117 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:15:36.983125 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:15:36.983133 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:15:36.983141 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:15:36.983149 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:15:36.983157 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:15:36.983165 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:15:36.983173 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 00:15:36.983184 kernel: landlock: Up and running.
Sep 9 00:15:36.983192 kernel: SELinux: Initializing.
Sep 9 00:15:36.983200 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:15:36.983217 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:15:36.983225 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:15:36.983234 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:15:36.983241 kernel: ... version: 0
Sep 9 00:15:36.983249 kernel: ... bit width: 48
Sep 9 00:15:36.983257 kernel: ... generic registers: 6
Sep 9 00:15:36.983267 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:15:36.983275 kernel: ... max period: 00007fffffffffff
Sep 9 00:15:36.983283 kernel: ... fixed-purpose events: 0
Sep 9 00:15:36.983291 kernel: ... event mask: 000000000000003f
Sep 9 00:15:36.983299 kernel: signal: max sigframe size: 1776
Sep 9 00:15:36.983307 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:15:36.983317 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:15:36.983325 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 00:15:36.983333 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:15:36.983344 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:15:36.983352 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:15:36.983359 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:15:36.983368 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:15:36.983376 kernel: Memory: 2424720K/2565800K available (14336K kernel code, 2428K rwdata, 9956K rodata, 53832K init, 1088K bss, 135148K reserved, 0K cma-reserved)
Sep 9 00:15:36.983384 kernel: devtmpfs: initialized
Sep 9 00:15:36.983392 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:15:36.983400 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 00:15:36.983408 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 00:15:36.983418 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Sep 9 00:15:36.983426 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 00:15:36.983434 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Sep 9 00:15:36.983442 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 00:15:36.983450 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:15:36.983458 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:15:36.983466 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:15:36.983474 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:15:36.983484 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:15:36.983492 kernel: audit: type=2000 audit(1757376933.674:1): state=initialized audit_enabled=0 res=1
Sep 9 00:15:36.983500 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:15:36.983508 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:15:36.983516 kernel: cpuidle: using governor menu
Sep 9 00:15:36.983524 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:15:36.983532 kernel: dca service started, version 1.12.1
Sep 9 00:15:36.983540 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Sep 9 00:15:36.983548 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:15:36.983558 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:15:36.983566 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:15:36.983574 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:15:36.983582 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:15:36.983590 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:15:36.983598 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:15:36.983606 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:15:36.983614 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:15:36.983621 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:15:36.983631 kernel: ACPI: Interpreter enabled
Sep 9 00:15:36.983639 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:15:36.983647 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:15:36.983655 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:15:36.983663 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:15:36.983671 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:15:36.983679 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:15:36.983907 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:15:36.984056 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:15:36.984176 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:15:36.984186 kernel: PCI host bridge to bus 0000:00
Sep 9 00:15:36.984327 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:15:36.984439 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:15:36.984552 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:15:36.984666 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Sep 9 00:15:36.984778 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Sep 9 00:15:36.984885 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:15:36.985014 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:15:36.985161 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Sep 9 00:15:36.985311 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Sep 9 00:15:36.985434 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Sep 9 00:15:36.985560 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Sep 9 00:15:36.985681 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Sep 9 00:15:36.985801 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:15:36.985954 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 00:15:36.986080 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Sep 9 00:15:36.986201 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Sep 9 00:15:36.986334 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Sep 9 00:15:36.986470 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Sep 9 00:15:36.986597 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Sep 9 00:15:36.986718 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Sep 9 00:15:36.986838 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Sep 9 00:15:36.986990 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Sep 9 00:15:36.987112 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Sep 9 00:15:36.987242 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Sep 9 00:15:36.987368 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Sep 9 00:15:36.987488 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Sep 9 00:15:36.987624 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Sep 9 00:15:36.987745 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:15:36.987905 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Sep 9 00:15:36.988072 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Sep 9 00:15:36.988194 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Sep 9 00:15:36.988349 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Sep 9 00:15:36.988471 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Sep 9 00:15:36.988482 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:15:36.988491 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:15:36.988499 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:15:36.988507 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:15:36.988515 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:15:36.988523 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:15:36.988535 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:15:36.988543 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:15:36.988551 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:15:36.988559 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:15:36.988567 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:15:36.988575 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:15:36.988583 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:15:36.988591 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:15:36.988599 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:15:36.988609 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:15:36.988619 kernel: iommu: Default domain type: Translated
Sep 9 00:15:36.988628 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:15:36.988638 kernel: efivars: Registered efivars operations
Sep 9 00:15:36.988646 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:15:36.988654 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:15:36.988662 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 00:15:36.988670 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Sep 9 00:15:36.988678 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Sep 9 00:15:36.988688 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Sep 9 00:15:36.988696 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Sep 9 00:15:36.988703 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Sep 9 00:15:36.988711 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Sep 9 00:15:36.988719 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Sep 9 00:15:36.988839 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:15:36.988974 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:15:36.989094 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:15:36.989108 kernel: vgaarb: loaded
Sep 9 00:15:36.989117 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:15:36.989126 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:15:36.989134 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:15:36.989142 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:15:36.989150 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:15:36.989159 kernel: pnp: PnP ACPI init
Sep 9 00:15:36.989333 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Sep 9 00:15:36.989351 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:15:36.989360 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:15:36.989369 kernel: NET: Registered PF_INET protocol family
Sep 9 00:15:36.989378 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:15:36.989386 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:15:36.989395 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:15:36.989403 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:15:36.989411 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:15:36.989420 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:15:36.989430 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:15:36.989439 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:15:36.989447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:15:36.989455 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:15:36.989577 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Sep 9 00:15:36.989699 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Sep 9 00:15:36.989809 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:15:36.989943 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:15:36.990063 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:15:36.990172 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Sep 9 00:15:36.990302 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Sep 9 00:15:36.990414 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Sep 9 00:15:36.990425 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:15:36.990433 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Sep 9 00:15:36.990449 kernel: Initialise system trusted keyrings
Sep 9 00:15:36.990463 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:15:36.990471 kernel: Key type asymmetric registered
Sep 9 00:15:36.990479 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:15:36.990488 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 9 00:15:36.990499 kernel: io scheduler mq-deadline registered
Sep 9 00:15:36.990508 kernel: io scheduler kyber registered
Sep 9 00:15:36.990516 kernel: io scheduler bfq registered
Sep 9 00:15:36.990527 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:15:36.990536 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:15:36.990545 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:15:36.990553 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:15:36.990562 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:15:36.990570 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:15:36.990579 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:15:36.990587 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:15:36.990596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:15:36.990743 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:15:36.990756 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:15:36.990873 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:15:36.991008 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:15:36 UTC
(1757376936) Sep 9 00:15:36.991123 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Sep 9 00:15:36.991134 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 9 00:15:36.991142 kernel: efifb: probing for efifb Sep 9 00:15:36.991151 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Sep 9 00:15:36.991164 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Sep 9 00:15:36.991172 kernel: efifb: scrolling: redraw Sep 9 00:15:36.991181 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Sep 9 00:15:36.991193 kernel: Console: switching to colour frame buffer device 160x50 Sep 9 00:15:36.991201 kernel: fb0: EFI VGA frame buffer device Sep 9 00:15:36.991218 kernel: pstore: Using crash dump compression: deflate Sep 9 00:15:36.991227 kernel: pstore: Registered efi_pstore as persistent store backend Sep 9 00:15:36.991235 kernel: NET: Registered PF_INET6 protocol family Sep 9 00:15:36.991244 kernel: Segment Routing with IPv6 Sep 9 00:15:36.991255 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 00:15:36.991264 kernel: NET: Registered PF_PACKET protocol family Sep 9 00:15:36.991272 kernel: Key type dns_resolver registered Sep 9 00:15:36.991280 kernel: IPI shorthand broadcast: enabled Sep 9 00:15:36.991289 kernel: sched_clock: Marking stable (3269003990, 171416976)->(3488674400, -48253434) Sep 9 00:15:36.991297 kernel: registered taskstats version 1 Sep 9 00:15:36.991306 kernel: Loading compiled-in X.509 certificates Sep 9 00:15:36.991315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 08d0986253b18b7fd74c2cc5404da4ba92260e75' Sep 9 00:15:36.991323 kernel: Demotion targets for Node 0: null Sep 9 00:15:36.991334 kernel: Key type .fscrypt registered Sep 9 00:15:36.991342 kernel: Key type fscrypt-provisioning registered Sep 9 00:15:36.991350 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 9 00:15:36.991358 kernel: ima: Allocated hash algorithm: sha1 Sep 9 00:15:36.991367 kernel: ima: No architecture policies found Sep 9 00:15:36.991375 kernel: clk: Disabling unused clocks Sep 9 00:15:36.991383 kernel: Warning: unable to open an initial console. Sep 9 00:15:36.991392 kernel: Freeing unused kernel image (initmem) memory: 53832K Sep 9 00:15:36.991400 kernel: Write protecting the kernel read-only data: 24576k Sep 9 00:15:36.991411 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K Sep 9 00:15:36.991419 kernel: Run /init as init process Sep 9 00:15:36.991427 kernel: with arguments: Sep 9 00:15:36.991435 kernel: /init Sep 9 00:15:36.991443 kernel: with environment: Sep 9 00:15:36.991451 kernel: HOME=/ Sep 9 00:15:36.991460 kernel: TERM=linux Sep 9 00:15:36.991468 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 00:15:36.991477 systemd[1]: Successfully made /usr/ read-only. Sep 9 00:15:36.991494 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 00:15:36.991505 systemd[1]: Detected virtualization kvm. Sep 9 00:15:36.991516 systemd[1]: Detected architecture x86-64. Sep 9 00:15:36.991528 systemd[1]: Running in initrd. Sep 9 00:15:36.991538 systemd[1]: No hostname configured, using default hostname. Sep 9 00:15:36.991548 systemd[1]: Hostname set to . Sep 9 00:15:36.991556 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:15:36.991568 systemd[1]: Queued start job for default target initrd.target. Sep 9 00:15:36.991577 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 00:15:36.991586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:15:36.991596 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 00:15:36.991605 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:15:36.991614 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 00:15:36.991624 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 00:15:36.991637 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 00:15:36.991646 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 00:15:36.991655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:15:36.991663 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:15:36.991674 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:15:36.991684 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:15:36.991695 systemd[1]: Reached target swap.target - Swaps. Sep 9 00:15:36.991704 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:15:36.991715 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:15:36.991724 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:15:36.991733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 00:15:36.991742 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 00:15:36.991751 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:15:36.991760 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 9 00:15:36.991769 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:15:36.991778 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:15:36.991786 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 00:15:36.991797 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:15:36.991806 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 00:15:36.991818 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 00:15:36.991827 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 00:15:36.991836 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:15:36.991845 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:15:36.991855 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:15:36.991866 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 00:15:36.991881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:15:36.991915 systemd-journald[222]: Collecting audit messages is disabled. Sep 9 00:15:36.991956 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 00:15:36.991968 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 00:15:36.991978 systemd-journald[222]: Journal started Sep 9 00:15:36.991998 systemd-journald[222]: Runtime Journal (/run/log/journal/aaa2039cc0da4214ae00c587d65c621e) is 6M, max 48.5M, 42.4M free. Sep 9 00:15:36.978256 systemd-modules-load[223]: Inserted module 'overlay' Sep 9 00:15:36.994851 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:15:36.996097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 9 00:15:37.036162 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:15:37.054737 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 00:15:37.057374 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 00:15:37.058639 systemd-tmpfiles[233]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 00:15:37.063898 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:15:37.069784 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 00:15:37.069815 kernel: Bridge firewalling registered Sep 9 00:15:37.066942 systemd-modules-load[223]: Inserted module 'br_netfilter' Sep 9 00:15:37.067132 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:15:37.076055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:15:37.078115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:15:37.085992 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:15:37.091901 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:15:37.094213 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:15:37.100720 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:15:37.114242 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Sep 9 00:15:37.145115 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c495f73c03808403ea4f55eb54c843aae6678d256d64068b1371f8afce28979a Sep 9 00:15:37.166459 systemd-resolved[256]: Positive Trust Anchors: Sep 9 00:15:37.167405 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:15:37.167437 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:15:37.170012 systemd-resolved[256]: Defaulting to hostname 'linux'. Sep 9 00:15:37.171279 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:15:37.176147 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:15:37.284963 kernel: SCSI subsystem initialized Sep 9 00:15:37.294963 kernel: Loading iSCSI transport class v2.0-870. Sep 9 00:15:37.320957 kernel: iscsi: registered transport (tcp) Sep 9 00:15:37.342094 kernel: iscsi: registered transport (qla4xxx) Sep 9 00:15:37.342144 kernel: QLogic iSCSI HBA Driver Sep 9 00:15:37.361729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 00:15:37.378323 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:15:37.380837 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:15:37.435321 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 00:15:37.443844 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 00:15:37.511967 kernel: raid6: avx2x4 gen() 28546 MB/s Sep 9 00:15:37.528955 kernel: raid6: avx2x2 gen() 29736 MB/s Sep 9 00:15:37.546118 kernel: raid6: avx2x1 gen() 24328 MB/s Sep 9 00:15:37.546154 kernel: raid6: using algorithm avx2x2 gen() 29736 MB/s Sep 9 00:15:37.564130 kernel: raid6: .... xor() 19006 MB/s, rmw enabled Sep 9 00:15:37.564171 kernel: raid6: using avx2x2 recovery algorithm Sep 9 00:15:37.584956 kernel: xor: automatically using best checksumming function avx Sep 9 00:15:37.762961 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 00:15:37.771555 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:15:37.774917 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:15:37.815808 systemd-udevd[472]: Using default interface naming scheme 'v255'. Sep 9 00:15:37.823293 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:15:37.827036 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 00:15:37.863037 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation Sep 9 00:15:37.895658 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:15:37.899035 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:15:38.046421 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:15:38.055417 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Sep 9 00:15:38.102949 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 9 00:15:38.136892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:15:38.137015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:15:38.151866 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:15:38.156196 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:15:38.158717 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 00:15:38.164095 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 00:15:38.164563 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 00:15:38.171763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:15:38.173997 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:15:38.176959 kernel: libata version 3.00 loaded. Sep 9 00:15:38.180088 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:15:38.183870 kernel: AES CTR mode by8 optimization enabled Sep 9 00:15:38.186273 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 00:15:38.190489 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 00:15:38.190521 kernel: GPT:9289727 != 19775487 Sep 9 00:15:38.190537 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 00:15:38.190559 kernel: GPT:9289727 != 19775487 Sep 9 00:15:38.191493 kernel: GPT: Use GNU Parted to correct GPT errors. 
Sep 9 00:15:38.191523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:15:38.194959 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 00:15:38.197006 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 00:15:38.199409 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Sep 9 00:15:38.199588 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Sep 9 00:15:38.199739 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 00:15:38.211987 kernel: scsi host0: ahci Sep 9 00:15:38.212207 kernel: scsi host1: ahci Sep 9 00:15:38.213248 kernel: scsi host2: ahci Sep 9 00:15:38.214317 kernel: scsi host3: ahci Sep 9 00:15:38.215904 kernel: scsi host4: ahci Sep 9 00:15:38.218946 kernel: scsi host5: ahci Sep 9 00:15:38.219533 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 1 Sep 9 00:15:38.221347 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 1 Sep 9 00:15:38.221370 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 1 Sep 9 00:15:38.224068 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 1 Sep 9 00:15:38.224109 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 1 Sep 9 00:15:38.225500 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 1 Sep 9 00:15:38.230620 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:15:38.246692 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 00:15:38.275983 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 00:15:38.284047 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
Sep 9 00:15:38.285332 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 00:15:38.296976 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:15:38.299285 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 00:15:38.545962 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 00:15:38.546071 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 9 00:15:38.546946 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 00:15:38.546964 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:15:38.548203 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 9 00:15:38.548223 kernel: ata3.00: applying bridge limits Sep 9 00:15:38.548944 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 00:15:38.549961 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 00:15:38.550961 kernel: ata3.00: LPM support broken, forcing max_power Sep 9 00:15:38.551945 kernel: ata3.00: configured for UDMA/100 Sep 9 00:15:38.551967 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 00:15:38.552964 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 9 00:15:38.575891 disk-uuid[632]: Primary Header is updated. Sep 9 00:15:38.575891 disk-uuid[632]: Secondary Entries is updated. Sep 9 00:15:38.575891 disk-uuid[632]: Secondary Header is updated. Sep 9 00:15:38.580951 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:15:38.586959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:15:38.602963 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 9 00:15:38.603245 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 9 00:15:38.614989 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 9 00:15:39.008725 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Sep 9 00:15:39.010542 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:15:39.012300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:15:39.013584 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:15:39.015601 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 00:15:39.037717 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:15:39.593838 disk-uuid[633]: The operation has completed successfully. Sep 9 00:15:39.595169 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 00:15:39.628707 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 00:15:39.628864 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 00:15:39.669693 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 00:15:39.710655 sh[662]: Success Sep 9 00:15:39.729227 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 00:15:39.729269 kernel: device-mapper: uevent: version 1.0.3 Sep 9 00:15:39.730284 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 00:15:39.740960 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Sep 9 00:15:39.775915 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 00:15:39.778718 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 00:15:39.794143 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Sep 9 00:15:39.803858 kernel: BTRFS: device fsid c483a4f4-f0a7-42f4-ac8d-111955dab3a7 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (674) Sep 9 00:15:39.803904 kernel: BTRFS info (device dm-0): first mount of filesystem c483a4f4-f0a7-42f4-ac8d-111955dab3a7 Sep 9 00:15:39.803933 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:15:39.810007 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 00:15:39.810033 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 00:15:39.811431 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 00:15:39.812217 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 00:15:39.813507 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 00:15:39.815208 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 00:15:39.816404 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 00:15:39.872987 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (709) Sep 9 00:15:39.875803 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:15:39.875834 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:15:39.878945 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:15:39.878971 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:15:39.883952 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:15:39.884874 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 00:15:39.888084 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Sep 9 00:15:40.023148 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:15:40.025973 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:15:40.116846 ignition[756]: Ignition 2.21.0 Sep 9 00:15:40.116862 ignition[756]: Stage: fetch-offline Sep 9 00:15:40.116940 ignition[756]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:15:40.116956 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:15:40.117084 ignition[756]: parsed url from cmdline: "" Sep 9 00:15:40.117088 ignition[756]: no config URL provided Sep 9 00:15:40.117094 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 00:15:40.117104 ignition[756]: no config at "/usr/lib/ignition/user.ign" Sep 9 00:15:40.130609 systemd-networkd[848]: lo: Link UP Sep 9 00:15:40.117143 ignition[756]: op(1): [started] loading QEMU firmware config module Sep 9 00:15:40.130614 systemd-networkd[848]: lo: Gained carrier Sep 9 00:15:40.117149 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 00:15:40.132248 systemd-networkd[848]: Enumeration completed Sep 9 00:15:40.132396 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:15:40.132609 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:15:40.132614 systemd-networkd[848]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:15:40.137295 systemd-networkd[848]: eth0: Link UP Sep 9 00:15:40.146144 systemd[1]: Reached target network.target - Network. Sep 9 00:15:40.149238 systemd-networkd[848]: eth0: Gained carrier Sep 9 00:15:40.149248 systemd-networkd[848]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 9 00:15:40.166669 ignition[756]: op(1): [finished] loading QEMU firmware config module Sep 9 00:15:40.169990 systemd-networkd[848]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:15:40.205820 ignition[756]: parsing config with SHA512: 9d76a9ee138502cf8b39a44cfcaa81b1d55455534b7db300e5a4d9120d85073dfa33866fe1e73a4679bea0ecf1af6d1501f0f122236b2259744bf22bb28402b9 Sep 9 00:15:40.210074 unknown[756]: fetched base config from "system" Sep 9 00:15:40.210088 unknown[756]: fetched user config from "qemu" Sep 9 00:15:40.210431 ignition[756]: fetch-offline: fetch-offline passed Sep 9 00:15:40.210486 ignition[756]: Ignition finished successfully Sep 9 00:15:40.213718 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:15:40.216204 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 00:15:40.218229 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 00:15:40.266010 ignition[857]: Ignition 2.21.0 Sep 9 00:15:40.266024 ignition[857]: Stage: kargs Sep 9 00:15:40.266200 ignition[857]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:15:40.266213 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:15:40.267418 ignition[857]: kargs: kargs passed Sep 9 00:15:40.267516 ignition[857]: Ignition finished successfully Sep 9 00:15:40.275816 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 00:15:40.278888 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 00:15:40.318648 ignition[865]: Ignition 2.21.0 Sep 9 00:15:40.318665 ignition[865]: Stage: disks Sep 9 00:15:40.318836 ignition[865]: no configs at "/usr/lib/ignition/base.d" Sep 9 00:15:40.318850 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:15:40.319742 ignition[865]: disks: disks passed Sep 9 00:15:40.319792 ignition[865]: Ignition finished successfully Sep 9 00:15:40.355580 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 00:15:40.356558 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 00:15:40.358200 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 00:15:40.358513 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:15:40.358841 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:15:40.359339 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:15:40.360727 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 00:15:40.390367 systemd-fsck[875]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 00:15:40.629193 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 00:15:40.630882 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 00:15:40.761950 kernel: EXT4-fs (vda9): mounted filesystem 4b59fff7-9272-4156-91f8-37989d927dc6 r/w with ordered data mode. Quota mode: none. Sep 9 00:15:40.762400 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:15:40.763900 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:15:40.766831 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:15:40.768646 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:15:40.769742 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Sep 9 00:15:40.769783 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:15:40.769806 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:15:40.783992 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:15:40.786700 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:15:40.792163 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (883) Sep 9 00:15:40.792190 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:15:40.792204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:15:40.794955 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:15:40.794993 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:15:40.797493 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:15:40.828951 initrd-setup-root[907]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:15:40.834572 initrd-setup-root[914]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:15:40.840904 initrd-setup-root[921]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:15:40.845891 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:15:40.939938 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:15:40.943734 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:15:40.987680 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:15:40.995334 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:15:40.997140 kernel: BTRFS info (device vda6): last unmount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:15:41.014555 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 00:15:41.029830 ignition[998]: INFO : Ignition 2.21.0 Sep 9 00:15:41.029830 ignition[998]: INFO : Stage: mount Sep 9 00:15:41.032220 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:15:41.032220 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:15:41.032220 ignition[998]: INFO : mount: mount passed Sep 9 00:15:41.032220 ignition[998]: INFO : Ignition finished successfully Sep 9 00:15:41.034747 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:15:41.037337 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:15:41.057006 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:15:41.106390 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1010) Sep 9 00:15:41.106426 kernel: BTRFS info (device vda6): first mount of filesystem 1ca5876a-e169-4e15-a56e-4292fa8c609f Sep 9 00:15:41.106438 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:15:41.110945 kernel: BTRFS info (device vda6): turning on async discard Sep 9 00:15:41.110969 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 00:15:41.112714 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 9 00:15:41.146292 ignition[1027]: INFO : Ignition 2.21.0
Sep 9 00:15:41.146292 ignition[1027]: INFO : Stage: files
Sep 9 00:15:41.148355 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:15:41.148355 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:15:41.150601 ignition[1027]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 00:15:41.151985 ignition[1027]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 00:15:41.151985 ignition[1027]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 00:15:41.155191 ignition[1027]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 00:15:41.155191 ignition[1027]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 00:15:41.154980 unknown[1027]: wrote ssh authorized keys file for user: core
Sep 9 00:15:41.159161 ignition[1027]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 00:15:41.163195 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 9 00:15:41.165414 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 9 00:15:41.199148 systemd-networkd[848]: eth0: Gained IPv6LL
Sep 9 00:15:41.224234 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 00:15:41.794778 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:15:41.797102 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 00:15:41.864270 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:15:41.866220 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 00:15:41.866220 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:15:41.922165 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:15:41.922165 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:15:41.926597 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 9 00:15:42.361371 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 9 00:15:42.998847 ignition[1027]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 9 00:15:42.998847 ignition[1027]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 9 00:15:43.003195 ignition[1027]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:15:43.151213 ignition[1027]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 00:15:43.151213 ignition[1027]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 9 00:15:43.151213 ignition[1027]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 9 00:15:43.151213 ignition[1027]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:15:43.191457 ignition[1027]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 00:15:43.191457 ignition[1027]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 9 00:15:43.191457 ignition[1027]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:15:43.209969 ignition[1027]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:15:43.219446 ignition[1027]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 00:15:43.221234 ignition[1027]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 00:15:43.221234 ignition[1027]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 00:15:43.221234 ignition[1027]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 00:15:43.221234 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:15:43.221234 ignition[1027]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 00:15:43.221234 ignition[1027]: INFO : files: files passed
Sep 9 00:15:43.221234 ignition[1027]: INFO : Ignition finished successfully
Sep 9 00:15:43.231278 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 00:15:43.234284 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 00:15:43.236616 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 00:15:43.252274 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 00:15:43.252394 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 00:15:43.255667 initrd-setup-root-after-ignition[1056]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 00:15:43.259625 initrd-setup-root-after-ignition[1058]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:15:43.261503 initrd-setup-root-after-ignition[1058]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:15:43.263206 initrd-setup-root-after-ignition[1062]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 00:15:43.265872 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:15:43.268260 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 00:15:43.270467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 00:15:43.327225 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 00:15:43.327875 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 00:15:43.329153 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 00:15:43.329529 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 00:15:43.329915 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 00:15:43.337048 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 00:15:43.381395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:15:43.384419 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 00:15:43.406358 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:15:43.408832 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:15:43.409432 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 00:15:43.409774 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 00:15:43.409981 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 00:15:43.413715 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 00:15:43.414273 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 00:15:43.414636 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 00:15:43.415037 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 00:15:43.415559 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 00:15:43.415942 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 00:15:43.416482 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 00:15:43.416845 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:15:43.417243 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 00:15:43.417599 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 00:15:43.417976 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 00:15:43.418475 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 00:15:43.418628 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:15:43.438678 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:15:43.439313 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:15:43.442262 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 00:15:43.444193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:15:43.444856 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 00:15:43.445065 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:15:43.450558 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 00:15:43.450728 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:15:43.451449 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 00:15:43.454254 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 00:15:43.459053 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:15:43.459706 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 00:15:43.460299 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 00:15:43.460646 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 00:15:43.460759 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:15:43.465448 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 00:15:43.465536 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:15:43.467300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 00:15:43.467426 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 00:15:43.468944 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 00:15:43.469064 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 00:15:43.473377 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 00:15:43.473803 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 00:15:43.473937 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:15:43.475165 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 00:15:43.478419 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 00:15:43.478548 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:15:43.480306 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 00:15:43.480455 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:15:43.490158 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 00:15:43.522230 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 00:15:43.542792 ignition[1082]: INFO : Ignition 2.21.0
Sep 9 00:15:43.542792 ignition[1082]: INFO : Stage: umount
Sep 9 00:15:43.544735 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 00:15:43.544735 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:15:43.544735 ignition[1082]: INFO : umount: umount passed
Sep 9 00:15:43.544735 ignition[1082]: INFO : Ignition finished successfully
Sep 9 00:15:43.548458 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 00:15:43.551067 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 00:15:43.551201 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 00:15:43.552114 systemd[1]: Stopped target network.target - Network.
Sep 9 00:15:43.555766 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 00:15:43.555849 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 00:15:43.557398 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 00:15:43.557458 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 00:15:43.557732 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 00:15:43.557813 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 00:15:43.558325 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 00:15:43.558382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 00:15:43.558808 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 00:15:43.565162 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 00:15:43.573004 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 00:15:43.573199 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 00:15:43.577804 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 00:15:43.578199 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 00:15:43.578346 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 00:15:43.582255 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 00:15:43.583073 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 00:15:43.585834 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 00:15:43.585910 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:15:43.589297 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 00:15:43.591252 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 00:15:43.591326 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:15:43.591757 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 00:15:43.591816 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:15:43.596493 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 00:15:43.596550 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:15:43.597231 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 00:15:43.597296 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:15:43.641264 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:15:43.642802 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 00:15:43.642879 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:15:43.656705 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 00:15:43.656897 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:15:43.657412 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 00:15:43.657459 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:15:43.660276 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 00:15:43.660312 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:15:43.663957 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 00:15:43.664019 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:15:43.666612 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 00:15:43.666697 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:15:43.669138 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:15:43.669279 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:15:43.675770 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 00:15:43.678505 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 00:15:43.678591 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:15:43.681105 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 00:15:43.681175 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:15:43.684841 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:15:43.684911 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:15:43.689549 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 00:15:43.689634 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 00:15:43.689703 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 00:15:43.690293 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 00:15:43.690429 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 00:15:43.691707 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 00:15:43.691831 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 00:15:44.261261 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 00:15:44.261427 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 00:15:44.262057 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 00:15:44.263802 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 00:15:44.263858 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 00:15:44.265519 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 00:15:44.295430 systemd[1]: Switching root.
Sep 9 00:15:44.390539 systemd-journald[222]: Journal stopped
Sep 9 00:15:46.806842 systemd-journald[222]: Received SIGTERM from PID 1 (systemd).
Sep 9 00:15:46.806969 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 00:15:46.806987 kernel: SELinux: policy capability open_perms=1
Sep 9 00:15:46.806999 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 00:15:46.807010 kernel: SELinux: policy capability always_check_network=0
Sep 9 00:15:46.807021 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 00:15:46.807841 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 00:15:46.807874 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 00:15:46.807886 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 00:15:46.807897 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 00:15:46.807909 kernel: audit: type=1403 audit(1757376945.594:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 00:15:46.807957 systemd[1]: Successfully loaded SELinux policy in 50.248ms.
Sep 9 00:15:46.807987 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.252ms.
Sep 9 00:15:46.808002 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 00:15:46.808019 systemd[1]: Detected virtualization kvm.
Sep 9 00:15:46.808031 systemd[1]: Detected architecture x86-64.
Sep 9 00:15:46.808043 systemd[1]: Detected first boot.
Sep 9 00:15:46.808055 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:15:46.808067 zram_generator::config[1127]: No configuration found.
Sep 9 00:15:46.808080 kernel: Guest personality initialized and is inactive
Sep 9 00:15:46.808092 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Sep 9 00:15:46.808103 kernel: Initialized host personality
Sep 9 00:15:46.808114 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 00:15:46.808129 systemd[1]: Populated /etc with preset unit settings.
Sep 9 00:15:46.808144 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 00:15:46.808156 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 00:15:46.808169 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 00:15:46.808181 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:15:46.808193 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 00:15:46.808205 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 00:15:46.808217 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 00:15:46.808232 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 00:15:46.808244 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 00:15:46.808256 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 00:15:46.808272 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 00:15:46.808284 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 00:15:46.808296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:15:46.808308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:15:46.808320 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 00:15:46.808332 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 00:15:46.808347 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 00:15:46.808360 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:15:46.808372 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 9 00:15:46.808384 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:15:46.808396 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:15:46.808407 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 00:15:46.808420 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 00:15:46.808432 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 00:15:46.808446 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 00:15:46.808458 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:15:46.808470 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:15:46.808482 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:15:46.808493 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:15:46.808505 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 00:15:46.808517 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 00:15:46.808531 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 00:15:46.808543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:15:46.808557 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:15:46.808569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:15:46.808581 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 00:15:46.808593 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 00:15:46.808605 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 00:15:46.808617 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 00:15:46.808629 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:46.808644 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 00:15:46.808668 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 00:15:46.808684 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 00:15:46.808697 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 00:15:46.808709 systemd[1]: Reached target machines.target - Containers.
Sep 9 00:15:46.808721 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 00:15:46.808733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:15:46.808745 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:15:46.808757 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 00:15:46.808769 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:15:46.808783 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:15:46.808796 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:15:46.808809 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 00:15:46.808821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:15:46.808834 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 00:15:46.808847 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 00:15:46.808859 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 00:15:46.808870 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 00:15:46.808882 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 00:15:46.808897 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:15:46.808909 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:15:46.809016 kernel: loop: module loaded
Sep 9 00:15:46.809030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:15:46.809042 kernel: fuse: init (API version 7.41)
Sep 9 00:15:46.809053 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 00:15:46.809066 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 00:15:46.809077 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 00:15:46.809093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:15:46.809105 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 00:15:46.809117 systemd[1]: Stopped verity-setup.service.
Sep 9 00:15:46.809130 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:46.809147 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 00:15:46.809161 kernel: ACPI: bus type drm_connector registered
Sep 9 00:15:46.809175 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 00:15:46.809187 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 00:15:46.809240 systemd-journald[1191]: Collecting audit messages is disabled.
Sep 9 00:15:46.809267 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 00:15:46.809280 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 00:15:46.809292 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 00:15:46.809305 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:15:46.809317 systemd-journald[1191]: Journal started
Sep 9 00:15:46.809341 systemd-journald[1191]: Runtime Journal (/run/log/journal/aaa2039cc0da4214ae00c587d65c621e) is 6M, max 48.5M, 42.4M free.
Sep 9 00:15:46.370635 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 00:15:46.391596 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 00:15:46.392222 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 00:15:46.824968 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:15:46.826892 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 00:15:46.827246 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 00:15:46.829042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:15:46.829301 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:15:46.831081 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:15:46.831412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:15:46.833061 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:15:46.833300 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:15:46.834991 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 00:15:46.835220 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 00:15:46.873866 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:15:46.874194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:15:46.876090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:15:46.877801 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 00:15:46.879495 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 00:15:46.881267 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 00:15:46.894144 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:15:46.899754 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 00:15:46.902884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 00:15:46.905337 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 00:15:46.906511 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 00:15:46.906544 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:15:46.908769 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 00:15:46.919780 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 00:15:46.980054 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:15:46.982509 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 00:15:46.985150 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 00:15:46.987065 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:15:46.997598 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 00:15:46.999587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:15:47.003029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:15:47.003704 systemd-journald[1191]: Time spent on flushing to /var/log/journal/aaa2039cc0da4214ae00c587d65c621e is 19.726ms for 1069 entries.
Sep 9 00:15:47.003704 systemd-journald[1191]: System Journal (/var/log/journal/aaa2039cc0da4214ae00c587d65c621e) is 8M, max 195.6M, 187.6M free.
Sep 9 00:15:47.413275 systemd-journald[1191]: Received client request to flush runtime journal.
Sep 9 00:15:47.413380 kernel: loop0: detected capacity change from 0 to 113872
Sep 9 00:15:47.413424 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 00:15:47.413448 kernel: loop1: detected capacity change from 0 to 146240
Sep 9 00:15:47.413471 kernel: loop2: detected capacity change from 0 to 224512
Sep 9 00:15:47.413501 kernel: loop3: detected capacity change from 0 to 113872
Sep 9 00:15:47.413521 kernel: loop4: detected capacity change from 0 to 146240
Sep 9 00:15:47.007098 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 00:15:47.010688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 00:15:47.012100 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 00:15:47.040650 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 00:15:47.051218 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 00:15:47.126988 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:15:47.282686 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 00:15:47.284736 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 00:15:47.288882 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 00:15:47.416649 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 00:15:47.420056 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 00:15:47.424432 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:15:47.517963 kernel: loop5: detected capacity change from 0 to 224512
Sep 9 00:15:47.546557 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 9 00:15:47.546579 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Sep 9 00:15:47.558167 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:15:47.673354 (sd-merge)[1261]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 00:15:47.674202 (sd-merge)[1261]: Merged extensions into '/usr'.
Sep 9 00:15:47.681722 systemd[1]: Reload requested from client PID 1246 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 00:15:47.681744 systemd[1]: Reloading...
Sep 9 00:15:47.759969 zram_generator::config[1293]: No configuration found.
Sep 9 00:15:48.155758 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:15:48.250480 systemd[1]: Reloading finished in 568 ms.
Sep 9 00:15:48.272812 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 00:15:48.296260 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 00:15:48.298537 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 00:15:48.308044 systemd[1]: Starting ensure-sysext.service...
Sep 9 00:15:48.310388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:15:48.376032 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 00:15:48.376076 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 00:15:48.376422 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 00:15:48.376689 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 00:15:48.377797 systemd-tmpfiles[1332]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 00:15:48.378187 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Sep 9 00:15:48.378275 systemd-tmpfiles[1332]: ACLs are not supported, ignoring.
Sep 9 00:15:48.418113 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:15:48.418134 systemd-tmpfiles[1332]: Skipping /boot
Sep 9 00:15:48.419967 systemd[1]: Reload requested from client PID 1331 ('systemctl') (unit ensure-sysext.service)...
Sep 9 00:15:48.419990 systemd[1]: Reloading...
Sep 9 00:15:48.442232 systemd-tmpfiles[1332]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 00:15:48.442265 systemd-tmpfiles[1332]: Skipping /boot
Sep 9 00:15:48.445947 ldconfig[1238]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 00:15:48.491963 zram_generator::config[1361]: No configuration found.
Sep 9 00:15:48.616621 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:15:48.698749 systemd[1]: Reloading finished in 278 ms.
Sep 9 00:15:48.712855 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 00:15:48.735147 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:15:48.750712 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 00:15:48.760831 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 00:15:48.763630 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 00:15:48.786291 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 00:15:48.791189 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:15:48.799653 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:15:48.803159 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 00:15:48.808069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:48.808261 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:15:48.814330 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:15:48.820419 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:15:48.833379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:15:48.836078 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:15:48.836229 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:15:48.838440 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 00:15:48.839556 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:48.850187 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 00:15:48.853244 systemd-udevd[1411]: Using default interface naming scheme 'v255'.
Sep 9 00:15:48.866690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:15:48.867271 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:15:48.869303 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:15:48.869569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:15:48.871407 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:15:48.871635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:15:48.881245 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:48.881564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:15:48.883642 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:15:48.886265 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:15:48.889185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:15:48.890856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:15:48.891057 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:15:48.893373 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 00:15:48.894584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:48.900439 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 00:15:48.914686 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 00:15:48.916607 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:15:48.916837 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:15:48.919564 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:15:48.919849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:15:48.923213 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:15:48.923477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:15:48.930161 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 00:15:48.951099 augenrules[1444]: No rules
Sep 9 00:15:48.951641 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:15:48.951957 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 00:15:48.956076 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:15:48.987882 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:48.988315 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 00:15:48.991457 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 00:15:48.993964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 00:15:48.998042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 00:15:49.000708 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 00:15:49.002118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 00:15:49.002165 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 00:15:49.008564 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:15:49.010054 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 9 00:15:49.011023 systemd[1]: Finished ensure-sysext.service.
Sep 9 00:15:49.012466 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 00:15:49.023226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 00:15:49.025144 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 00:15:49.041791 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 00:15:49.046564 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 00:15:49.048773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 00:15:49.049978 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 00:15:49.051666 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 00:15:49.051912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 00:15:49.063174 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 00:15:49.063251 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 00:15:49.066952 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 00:15:49.073198 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 00:15:49.184815 systemd-resolved[1405]: Positive Trust Anchors:
Sep 9 00:15:49.184834 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:15:49.184876 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:15:49.188571 systemd-resolved[1405]: Defaulting to hostname 'linux'.
Sep 9 00:15:49.190694 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:15:49.197707 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:15:49.235546 systemd-networkd[1479]: lo: Link UP
Sep 9 00:15:49.235559 systemd-networkd[1479]: lo: Gained carrier
Sep 9 00:15:49.236435 systemd-networkd[1479]: Enumeration completed
Sep 9 00:15:49.236530 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:15:49.237794 systemd[1]: Reached target network.target - Network.
Sep 9 00:15:49.241549 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 9 00:15:49.245118 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 9 00:15:49.258754 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 00:15:49.264583 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:15:49.266020 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 9 00:15:49.267460 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 9 00:15:49.268762 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Sep 9 00:15:49.269981 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 9 00:15:49.271244 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 9 00:15:49.271280 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:15:49.272218 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 00:15:49.273564 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 9 00:15:49.274831 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 9 00:15:49.276115 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:15:49.278436 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 9 00:15:49.281315 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 9 00:15:49.285707 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 9 00:15:49.288311 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 9 00:15:49.289721 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 9 00:15:49.293946 kernel: mousedev: PS/2 mouse device common for all mice
Sep 9 00:15:49.294188 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 9 00:15:49.295974 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 9 00:15:49.298283 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 9 00:15:49.300116 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 9 00:15:49.302707 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:15:49.302717 systemd-networkd[1479]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:15:49.304664 systemd-networkd[1479]: eth0: Link UP
Sep 9 00:15:49.305020 systemd-networkd[1479]: eth0: Gained carrier
Sep 9 00:15:49.305094 systemd-networkd[1479]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:15:49.315110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:15:49.316946 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Sep 9 00:15:49.320383 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 9 00:15:49.320493 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:15:49.321590 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:15:49.324064 systemd-networkd[1479]: eth0: DHCPv4 address 10.0.0.42/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:15:49.324296 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:15:49.324328 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 9 00:15:49.325431 systemd-timesyncd[1492]: Network configuration changed, trying to establish connection.
Sep 9 00:15:49.768237 systemd-resolved[1405]: Clock change detected. Flushing caches.
Sep 9 00:15:49.768301 systemd-timesyncd[1492]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 9 00:15:49.768362 systemd-timesyncd[1492]: Initial clock synchronization to Tue 2025-09-09 00:15:49.768186 UTC.
Sep 9 00:15:49.768613 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 9 00:15:49.774238 kernel: ACPI: button: Power Button [PWRF]
Sep 9 00:15:49.774273 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 9 00:15:49.774504 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 9 00:15:49.774681 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 9 00:15:49.771376 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 9 00:15:49.777265 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 9 00:15:49.787574 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 9 00:15:49.789943 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 9 00:15:49.791063 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 9 00:15:49.794904 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Sep 9 00:15:49.799104 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 9 00:15:49.801923 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 9 00:15:49.810907 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 9 00:15:49.816182 jq[1513]: false
Sep 9 00:15:49.817024 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 9 00:15:49.858293 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 00:15:49.870296 extend-filesystems[1514]: Found /dev/vda6
Sep 9 00:15:49.873955 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 9 00:15:49.875864 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Sep 9 00:15:49.876111 extend-filesystems[1514]: Found /dev/vda9
Sep 9 00:15:49.876639 oslogin_cache_refresh[1515]: Refreshing passwd entry cache
Sep 9 00:15:49.876915 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 9 00:15:49.878339 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 9 00:15:49.882156 extend-filesystems[1514]: Checking size of /dev/vda9
Sep 9 00:15:49.885195 systemd[1]: Starting update-engine.service - Update Engine...
Sep 9 00:15:49.888863 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 9 00:15:49.896316 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 9 00:15:49.898317 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 9 00:15:49.898574 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 9 00:15:49.898933 systemd[1]: motdgen.service: Deactivated successfully.
Sep 9 00:15:49.899193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 9 00:15:49.902418 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 9 00:15:49.904752 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting users, quitting
Sep 9 00:15:49.904752 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:15:49.904752 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Refreshing group entry cache
Sep 9 00:15:49.903854 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:15:49.903608 oslogin_cache_refresh[1515]: Failure getting users, quitting
Sep 9 00:15:49.911008 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Failure getting groups, quitting
Sep 9 00:15:49.911008 google_oslogin_nss_cache[1515]: oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:15:49.903629 oslogin_cache_refresh[1515]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Sep 9 00:15:49.903691 oslogin_cache_refresh[1515]: Refreshing group entry cache
Sep 9 00:15:49.909194 oslogin_cache_refresh[1515]: Failure getting groups, quitting
Sep 9 00:15:49.909206 oslogin_cache_refresh[1515]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Sep 9 00:15:49.913538 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Sep 9 00:15:49.913868 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Sep 9 00:15:49.918596 jq[1543]: true
Sep 9 00:15:49.925507 extend-filesystems[1514]: Resized partition /dev/vda9
Sep 9 00:15:49.934381 update_engine[1539]: I20250909 00:15:49.933961 1539 main.cc:92] Flatcar Update Engine starting
Sep 9 00:15:49.951503 (ntainerd)[1558]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:15:49.966310 jq[1554]: true
Sep 9 00:15:49.968118 extend-filesystems[1568]: resize2fs 1.47.2 (1-Jan-2025)
Sep 9 00:15:50.020286 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 00:15:50.052043 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:15:50.061321 dbus-daemon[1511]: [system] SELinux support is enabled
Sep 9 00:15:50.061712 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:15:50.065300 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:15:50.066235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 00:15:50.067558 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:15:50.067579 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 00:15:50.070760 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:15:50.075978 tar[1550]: linux-amd64/LICENSE
Sep 9 00:15:50.076195 tar[1550]: linux-amd64/helm
Sep 9 00:15:50.076703 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:15:50.082199 update_engine[1539]: I20250909 00:15:50.082149 1539 update_check_scheduler.cc:74] Next update check in 2m2s
Sep 9 00:15:50.085947 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 00:15:50.163660 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:15:50.164040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:15:50.177691 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:15:50.253779 systemd-logind[1534]: New seat seat0.
Sep 9 00:15:50.254948 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:15:50.289011 kernel: kvm_amd: TSC scaling supported
Sep 9 00:15:50.289052 kernel: kvm_amd: Nested Virtualization enabled
Sep 9 00:15:50.289065 kernel: kvm_amd: Nested Paging enabled
Sep 9 00:15:50.289078 kernel: kvm_amd: LBR virtualization supported
Sep 9 00:15:50.290078 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 9 00:15:50.290102 kernel: kvm_amd: Virtual GIF supported
Sep 9 00:15:50.319779 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:15:50.322628 systemd-logind[1534]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 00:15:50.963714 kernel: EDAC MC: Ver: 3.0.0
Sep 9 00:15:50.963837 containerd[1558]: time="2025-09-09T00:15:50Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 9 00:15:50.339338 systemd-logind[1534]: Watching system buttons on /dev/input/event2 (Power Button)
Sep 9 00:15:50.498145 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:15:50.530435 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:15:50.965281 containerd[1558]: time="2025-09-09T00:15:50.964954155Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 9 00:15:50.966609 sshd_keygen[1551]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.974806471Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.713µs"
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.974854391Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.974889357Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975094151Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975110551Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975141159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975209086Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975223223Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975565535Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975583629Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975667446Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 9 00:15:50.976787 containerd[1558]: time="2025-09-09T00:15:50.975679789Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.975806587Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.976065763Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.976099005Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.976108834Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.976165259Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.976384531Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 9 00:15:50.977137 containerd[1558]: time="2025-09-09T00:15:50.976451025Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:15:51.003631 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:15:51.008792 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:15:51.041357 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:15:51.041357 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:15:51.041357 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:15:51.053205 extend-filesystems[1514]: Resized filesystem in /dev/vda9
Sep 9 00:15:51.055402 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:15:51.127255 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:15:51.138541 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:15:51.139084 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:15:51.142927 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:15:51.176940 systemd-networkd[1479]: eth0: Gained IPv6LL
Sep 9 00:15:51.180527 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:15:51.187281 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:15:51.191021 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 00:15:51.208215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:15:51.212922 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:15:51.215082 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:15:51.228310 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:15:51.231408 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:15:51.232887 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 00:15:51.254330 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 00:15:51.254656 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 00:15:51.274372 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 00:15:51.359064 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 00:15:51.429218 bash[1593]: Updated "/home/core/.ssh/authorized_keys" Sep 9 00:15:51.431262 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 00:15:51.440762 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 00:15:51.454718 tar[1550]: linux-amd64/README.md Sep 9 00:15:51.518791 containerd[1558]: time="2025-09-09T00:15:51.518512554Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 00:15:51.518919 containerd[1558]: time="2025-09-09T00:15:51.518888169Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 00:15:51.518997 containerd[1558]: time="2025-09-09T00:15:51.518981233Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 00:15:51.519066 containerd[1558]: time="2025-09-09T00:15:51.519049641Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 00:15:51.519159 containerd[1558]: time="2025-09-09T00:15:51.519117629Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 00:15:51.519159 containerd[1558]: time="2025-09-09T00:15:51.519136605Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 00:15:51.519159 containerd[1558]: time="2025-09-09T00:15:51.519154358Z" 
level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 00:15:51.519159 containerd[1558]: time="2025-09-09T00:15:51.519168785Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 00:15:51.519379 containerd[1558]: time="2025-09-09T00:15:51.519188752Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 00:15:51.519379 containerd[1558]: time="2025-09-09T00:15:51.519206205Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 00:15:51.519379 containerd[1558]: time="2025-09-09T00:15:51.519217166Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 00:15:51.519379 containerd[1558]: time="2025-09-09T00:15:51.519247132Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 00:15:51.519475 containerd[1558]: time="2025-09-09T00:15:51.519449642Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 00:15:51.519510 containerd[1558]: time="2025-09-09T00:15:51.519476282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 00:15:51.519510 containerd[1558]: time="2025-09-09T00:15:51.519503202Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 00:15:51.519558 containerd[1558]: time="2025-09-09T00:15:51.519513782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 00:15:51.519558 containerd[1558]: time="2025-09-09T00:15:51.519525574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 00:15:51.519558 containerd[1558]: time="2025-09-09T00:15:51.519539741Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 00:15:51.519558 containerd[1558]: time="2025-09-09T00:15:51.519557824Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 00:15:51.519667 containerd[1558]: time="2025-09-09T00:15:51.519575658Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 00:15:51.519667 containerd[1558]: time="2025-09-09T00:15:51.519587540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 00:15:51.519667 containerd[1558]: time="2025-09-09T00:15:51.519599643Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 00:15:51.519667 containerd[1558]: time="2025-09-09T00:15:51.519623417Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 00:15:51.519792 containerd[1558]: time="2025-09-09T00:15:51.519769171Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 00:15:51.519833 containerd[1558]: time="2025-09-09T00:15:51.519817662Z" level=info msg="Start snapshots syncer" Sep 9 00:15:51.519892 containerd[1558]: time="2025-09-09T00:15:51.519876853Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 00:15:51.520020 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 9 00:15:51.538529 containerd[1558]: time="2025-09-09T00:15:51.520310316Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 00:15:51.538529 containerd[1558]: time="2025-09-09T00:15:51.520385627Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox 
type=io.containerd.podsandbox.controller.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520552660Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520706238Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520747365Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520758917Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520773454Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520783854Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520795556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520818719Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520851070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520862141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520871648Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 
Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520901104Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520933835Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 00:15:51.538778 containerd[1558]: time="2025-09-09T00:15:51.520942571Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.520951708Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.520963671Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.520976856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.520992585Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.521018113Z" level=info msg="runtime interface created" Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.521025687Z" level=info msg="created NRI interface" Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.521033131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.521046747Z" level=info msg="Connect containerd service" Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.521075871Z" level=info msg="using 
experimental NRI integration - disable nri plugin to prevent this" Sep 9 00:15:51.539229 containerd[1558]: time="2025-09-09T00:15:51.522205971Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 00:15:51.634166 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 00:15:51.639747 systemd[1]: Started sshd@0-10.0.0.42:22-10.0.0.1:56836.service - OpenSSH per-connection server daemon (10.0.0.1:56836). Sep 9 00:15:51.721683 containerd[1558]: time="2025-09-09T00:15:51.721627477Z" level=info msg="Start subscribing containerd event" Sep 9 00:15:51.721821 containerd[1558]: time="2025-09-09T00:15:51.721693681Z" level=info msg="Start recovering state" Sep 9 00:15:51.721879 containerd[1558]: time="2025-09-09T00:15:51.721842631Z" level=info msg="Start event monitor" Sep 9 00:15:51.721879 containerd[1558]: time="2025-09-09T00:15:51.721860013Z" level=info msg="Start cni network conf syncer for default" Sep 9 00:15:51.721879 containerd[1558]: time="2025-09-09T00:15:51.721871084Z" level=info msg="Start streaming server" Sep 9 00:15:51.721935 containerd[1558]: time="2025-09-09T00:15:51.721887304Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 00:15:51.721935 containerd[1558]: time="2025-09-09T00:15:51.721895710Z" level=info msg="runtime interface starting up..." Sep 9 00:15:51.721935 containerd[1558]: time="2025-09-09T00:15:51.721907182Z" level=info msg="starting plugins..." Sep 9 00:15:51.721935 containerd[1558]: time="2025-09-09T00:15:51.721926147Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 00:15:51.722036 containerd[1558]: time="2025-09-09T00:15:51.722006779Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Sep 9 00:15:51.722111 containerd[1558]: time="2025-09-09T00:15:51.722074956Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 00:15:51.722269 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 00:15:51.724001 containerd[1558]: time="2025-09-09T00:15:51.723955233Z" level=info msg="containerd successfully booted in 1.105524s" Sep 9 00:15:51.729773 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 56836 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:15:51.731773 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:15:51.738578 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 00:15:51.755689 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 00:15:51.765031 systemd-logind[1534]: New session 1 of user core. Sep 9 00:15:51.783472 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 00:15:51.801641 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 00:15:51.892415 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 00:15:51.895234 systemd-logind[1534]: New session c1 of user core. Sep 9 00:15:52.048219 systemd[1674]: Queued start job for default target default.target. Sep 9 00:15:52.067420 systemd[1674]: Created slice app.slice - User Application Slice. Sep 9 00:15:52.067457 systemd[1674]: Reached target paths.target - Paths. Sep 9 00:15:52.067519 systemd[1674]: Reached target timers.target - Timers. Sep 9 00:15:52.069556 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 00:15:52.094837 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 00:15:52.095033 systemd[1674]: Reached target sockets.target - Sockets. 
Sep 9 00:15:52.095116 systemd[1674]: Reached target basic.target - Basic System. Sep 9 00:15:52.095171 systemd[1674]: Reached target default.target - Main User Target. Sep 9 00:15:52.095232 systemd[1674]: Startup finished in 192ms. Sep 9 00:15:52.095414 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 00:15:52.098232 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 00:15:52.164251 systemd[1]: Started sshd@1-10.0.0.42:22-10.0.0.1:56850.service - OpenSSH per-connection server daemon (10.0.0.1:56850). Sep 9 00:15:52.234797 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 56850 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:15:52.237199 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:15:52.242341 systemd-logind[1534]: New session 2 of user core. Sep 9 00:15:52.254948 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 00:15:52.342391 sshd[1687]: Connection closed by 10.0.0.1 port 56850 Sep 9 00:15:52.343254 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Sep 9 00:15:52.354769 systemd[1]: sshd@1-10.0.0.42:22-10.0.0.1:56850.service: Deactivated successfully. Sep 9 00:15:52.358926 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 00:15:52.360020 systemd-logind[1534]: Session 2 logged out. Waiting for processes to exit. Sep 9 00:15:52.364491 systemd[1]: Started sshd@2-10.0.0.42:22-10.0.0.1:56860.service - OpenSSH per-connection server daemon (10.0.0.1:56860). Sep 9 00:15:52.367168 systemd-logind[1534]: Removed session 2. Sep 9 00:15:52.430831 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 56860 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:15:52.432367 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:15:52.437443 systemd-logind[1534]: New session 3 of user core. 
Sep 9 00:15:52.447892 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 00:15:52.513065 sshd[1695]: Connection closed by 10.0.0.1 port 56860 Sep 9 00:15:52.513676 sshd-session[1693]: pam_unix(sshd:session): session closed for user core Sep 9 00:15:52.518205 systemd[1]: sshd@2-10.0.0.42:22-10.0.0.1:56860.service: Deactivated successfully. Sep 9 00:15:52.521149 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 00:15:52.522115 systemd-logind[1534]: Session 3 logged out. Waiting for processes to exit. Sep 9 00:15:52.525012 systemd-logind[1534]: Removed session 3. Sep 9 00:15:52.687264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:15:52.688959 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 00:15:52.690196 systemd[1]: Startup finished in 3.328s (kernel) + 8.961s (initrd) + 6.702s (userspace) = 18.992s. Sep 9 00:15:52.696168 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:15:53.277952 kubelet[1705]: E0909 00:15:53.277869 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:15:53.281951 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:15:53.282200 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:15:53.282626 systemd[1]: kubelet.service: Consumed 1.653s CPU time, 266.4M memory peak. Sep 9 00:16:02.532611 systemd[1]: Started sshd@3-10.0.0.42:22-10.0.0.1:38342.service - OpenSSH per-connection server daemon (10.0.0.1:38342). 
Sep 9 00:16:02.590771 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 38342 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:16:02.592468 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:16:02.597576 systemd-logind[1534]: New session 4 of user core. Sep 9 00:16:02.611901 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 00:16:02.666562 sshd[1721]: Connection closed by 10.0.0.1 port 38342 Sep 9 00:16:02.666970 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Sep 9 00:16:02.677512 systemd[1]: sshd@3-10.0.0.42:22-10.0.0.1:38342.service: Deactivated successfully. Sep 9 00:16:02.679490 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 00:16:02.680372 systemd-logind[1534]: Session 4 logged out. Waiting for processes to exit. Sep 9 00:16:02.683620 systemd[1]: Started sshd@4-10.0.0.42:22-10.0.0.1:38352.service - OpenSSH per-connection server daemon (10.0.0.1:38352). Sep 9 00:16:02.684460 systemd-logind[1534]: Removed session 4. Sep 9 00:16:02.743818 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 38352 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:16:02.745465 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:16:02.750513 systemd-logind[1534]: New session 5 of user core. Sep 9 00:16:02.763867 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 00:16:02.814939 sshd[1729]: Connection closed by 10.0.0.1 port 38352 Sep 9 00:16:02.815239 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Sep 9 00:16:02.831370 systemd[1]: sshd@4-10.0.0.42:22-10.0.0.1:38352.service: Deactivated successfully. Sep 9 00:16:02.833482 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 00:16:02.834258 systemd-logind[1534]: Session 5 logged out. Waiting for processes to exit. 
Sep 9 00:16:02.837798 systemd[1]: Started sshd@5-10.0.0.42:22-10.0.0.1:38354.service - OpenSSH per-connection server daemon (10.0.0.1:38354). Sep 9 00:16:02.838394 systemd-logind[1534]: Removed session 5. Sep 9 00:16:02.902343 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 38354 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:16:02.904388 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:16:02.910327 systemd-logind[1534]: New session 6 of user core. Sep 9 00:16:02.919972 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 00:16:02.976810 sshd[1737]: Connection closed by 10.0.0.1 port 38354 Sep 9 00:16:02.977234 sshd-session[1735]: pam_unix(sshd:session): session closed for user core Sep 9 00:16:02.995250 systemd[1]: sshd@5-10.0.0.42:22-10.0.0.1:38354.service: Deactivated successfully. Sep 9 00:16:02.997310 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 00:16:02.998154 systemd-logind[1534]: Session 6 logged out. Waiting for processes to exit. Sep 9 00:16:03.002206 systemd[1]: Started sshd@6-10.0.0.42:22-10.0.0.1:38362.service - OpenSSH per-connection server daemon (10.0.0.1:38362). Sep 9 00:16:03.002794 systemd-logind[1534]: Removed session 6. Sep 9 00:16:03.067439 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 38362 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:16:03.069137 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:16:03.074471 systemd-logind[1534]: New session 7 of user core. Sep 9 00:16:03.083914 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 00:16:03.145247 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 00:16:03.145584 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:16:03.165995 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 9 00:16:03.167771 sshd[1745]: Connection closed by 10.0.0.1 port 38362 Sep 9 00:16:03.168195 sshd-session[1743]: pam_unix(sshd:session): session closed for user core Sep 9 00:16:03.181241 systemd[1]: sshd@6-10.0.0.42:22-10.0.0.1:38362.service: Deactivated successfully. Sep 9 00:16:03.183026 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 00:16:03.183730 systemd-logind[1534]: Session 7 logged out. Waiting for processes to exit. Sep 9 00:16:03.186876 systemd[1]: Started sshd@7-10.0.0.42:22-10.0.0.1:38368.service - OpenSSH per-connection server daemon (10.0.0.1:38368). Sep 9 00:16:03.187389 systemd-logind[1534]: Removed session 7. Sep 9 00:16:03.247269 sshd[1752]: Accepted publickey for core from 10.0.0.1 port 38368 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:16:03.249077 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:16:03.253847 systemd-logind[1534]: New session 8 of user core. Sep 9 00:16:03.267885 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 00:16:03.324094 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 00:16:03.324513 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:16:03.325667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 00:16:03.327770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 9 00:16:03.333207 sudo[1756]: pam_unix(sudo:session): session closed for user root Sep 9 00:16:03.339672 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 00:16:03.340020 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:16:03.361424 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 00:16:03.403036 augenrules[1781]: No rules Sep 9 00:16:03.404820 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 00:16:03.405107 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 00:16:03.406346 sudo[1755]: pam_unix(sudo:session): session closed for user root Sep 9 00:16:03.408319 sshd[1754]: Connection closed by 10.0.0.1 port 38368 Sep 9 00:16:03.408691 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Sep 9 00:16:03.423006 systemd[1]: sshd@7-10.0.0.42:22-10.0.0.1:38368.service: Deactivated successfully. Sep 9 00:16:03.425555 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 00:16:03.426486 systemd-logind[1534]: Session 8 logged out. Waiting for processes to exit. Sep 9 00:16:03.430130 systemd[1]: Started sshd@8-10.0.0.42:22-10.0.0.1:38382.service - OpenSSH per-connection server daemon (10.0.0.1:38382). Sep 9 00:16:03.431016 systemd-logind[1534]: Removed session 8. Sep 9 00:16:03.487275 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 38382 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:16:03.489158 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:16:03.494064 systemd-logind[1534]: New session 9 of user core. Sep 9 00:16:03.503883 systemd[1]: Started session-9.scope - Session 9 of User core. 
Sep 9 00:16:03.557764 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 00:16:03.558074 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 00:16:03.592160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:16:03.606185 (kubelet)[1803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 00:16:03.687231 kubelet[1803]: E0909 00:16:03.687147 1803 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 00:16:03.693899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 00:16:03.694123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 00:16:03.694570 systemd[1]: kubelet.service: Consumed 311ms CPU time, 110.6M memory peak. Sep 9 00:16:03.952634 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 00:16:03.973139 (dockerd)[1828]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 00:16:04.676355 dockerd[1828]: time="2025-09-09T00:16:04.676271359Z" level=info msg="Starting up" Sep 9 00:16:04.679411 dockerd[1828]: time="2025-09-09T00:16:04.679331167Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 00:16:05.521523 dockerd[1828]: time="2025-09-09T00:16:05.521432067Z" level=info msg="Loading containers: start." 
Sep 9 00:16:05.536771 kernel: Initializing XFRM netlink socket Sep 9 00:16:05.826025 systemd-networkd[1479]: docker0: Link UP Sep 9 00:16:05.903780 dockerd[1828]: time="2025-09-09T00:16:05.903547820Z" level=info msg="Loading containers: done." Sep 9 00:16:05.922620 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3333212217-merged.mount: Deactivated successfully. Sep 9 00:16:05.924802 dockerd[1828]: time="2025-09-09T00:16:05.924708294Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 00:16:05.924925 dockerd[1828]: time="2025-09-09T00:16:05.924889032Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Sep 9 00:16:05.925115 dockerd[1828]: time="2025-09-09T00:16:05.925086793Z" level=info msg="Initializing buildkit" Sep 9 00:16:05.972881 dockerd[1828]: time="2025-09-09T00:16:05.972803269Z" level=info msg="Completed buildkit initialization" Sep 9 00:16:05.978832 dockerd[1828]: time="2025-09-09T00:16:05.978777923Z" level=info msg="Daemon has completed initialization" Sep 9 00:16:05.978963 dockerd[1828]: time="2025-09-09T00:16:05.978863744Z" level=info msg="API listen on /run/docker.sock" Sep 9 00:16:05.979125 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 00:16:07.192044 containerd[1558]: time="2025-09-09T00:16:07.191968198Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 9 00:16:08.175679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024163494.mount: Deactivated successfully. 
Sep 9 00:16:09.561216 containerd[1558]: time="2025-09-09T00:16:09.561140682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:09.562001 containerd[1558]: time="2025-09-09T00:16:09.561939280Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687"
Sep 9 00:16:09.568654 containerd[1558]: time="2025-09-09T00:16:09.568595643Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:09.578404 containerd[1558]: time="2025-09-09T00:16:09.578319919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:09.579428 containerd[1558]: time="2025-09-09T00:16:09.579389415Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.387356305s"
Sep 9 00:16:09.579487 containerd[1558]: time="2025-09-09T00:16:09.579429440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\""
Sep 9 00:16:09.580427 containerd[1558]: time="2025-09-09T00:16:09.580378059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\""
Sep 9 00:16:11.142391 containerd[1558]: time="2025-09-09T00:16:11.142309628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:11.143207 containerd[1558]: time="2025-09-09T00:16:11.143151678Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128"
Sep 9 00:16:11.144321 containerd[1558]: time="2025-09-09T00:16:11.144264615Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:11.147609 containerd[1558]: time="2025-09-09T00:16:11.147572067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:11.148661 containerd[1558]: time="2025-09-09T00:16:11.148632246Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 1.568217587s"
Sep 9 00:16:11.148661 containerd[1558]: time="2025-09-09T00:16:11.148661641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\""
Sep 9 00:16:11.149250 containerd[1558]: time="2025-09-09T00:16:11.149223074Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\""
Sep 9 00:16:12.485720 containerd[1558]: time="2025-09-09T00:16:12.485608804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:12.486775 containerd[1558]: time="2025-09-09T00:16:12.486701032Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036"
Sep 9 00:16:12.488340 containerd[1558]: time="2025-09-09T00:16:12.488285934Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:12.493120 containerd[1558]: time="2025-09-09T00:16:12.493049247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:12.494679 containerd[1558]: time="2025-09-09T00:16:12.494618170Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 1.34535968s"
Sep 9 00:16:12.494679 containerd[1558]: time="2025-09-09T00:16:12.494668214Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\""
Sep 9 00:16:12.495777 containerd[1558]: time="2025-09-09T00:16:12.495539818Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\""
Sep 9 00:16:13.757052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount291236616.mount: Deactivated successfully.
Sep 9 00:16:13.758212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:16:13.759680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:16:14.401865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:16:14.416066 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:16:14.496295 kubelet[2118]: E0909 00:16:14.496210 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:16:14.501637 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:16:14.501956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:16:14.505848 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.6M memory peak.
Sep 9 00:16:14.691914 containerd[1558]: time="2025-09-09T00:16:14.691847519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:14.692796 containerd[1558]: time="2025-09-09T00:16:14.692729533Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170"
Sep 9 00:16:14.693719 containerd[1558]: time="2025-09-09T00:16:14.693687389Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:14.695782 containerd[1558]: time="2025-09-09T00:16:14.695698411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:14.696081 containerd[1558]: time="2025-09-09T00:16:14.696044971Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.200458886s"
Sep 9 00:16:14.696081 containerd[1558]: time="2025-09-09T00:16:14.696074446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\""
Sep 9 00:16:14.696574 containerd[1558]: time="2025-09-09T00:16:14.696542784Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 00:16:15.417529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3025524270.mount: Deactivated successfully.
Sep 9 00:16:16.878181 containerd[1558]: time="2025-09-09T00:16:16.878090435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:16.879120 containerd[1558]: time="2025-09-09T00:16:16.879045125Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 9 00:16:16.884542 containerd[1558]: time="2025-09-09T00:16:16.884492120Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:16.890284 containerd[1558]: time="2025-09-09T00:16:16.890230211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:16.891142 containerd[1558]: time="2025-09-09T00:16:16.891093480Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.19451573s"
Sep 9 00:16:16.891142 containerd[1558]: time="2025-09-09T00:16:16.891134467Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 9 00:16:16.891619 containerd[1558]: time="2025-09-09T00:16:16.891582207Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:16:17.480203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845040928.mount: Deactivated successfully.
Sep 9 00:16:17.490858 containerd[1558]: time="2025-09-09T00:16:17.490795956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:16:17.491612 containerd[1558]: time="2025-09-09T00:16:17.491569938Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 9 00:16:17.496533 containerd[1558]: time="2025-09-09T00:16:17.496492840Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:16:17.498795 containerd[1558]: time="2025-09-09T00:16:17.498764881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 00:16:17.499543 containerd[1558]: time="2025-09-09T00:16:17.499508355Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 607.894279ms"
Sep 9 00:16:17.499543 containerd[1558]: time="2025-09-09T00:16:17.499538933Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 00:16:17.500065 containerd[1558]: time="2025-09-09T00:16:17.500025405Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 9 00:16:18.283412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3933595856.mount: Deactivated successfully.
Sep 9 00:16:20.315664 containerd[1558]: time="2025-09-09T00:16:20.315564987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:20.316296 containerd[1558]: time="2025-09-09T00:16:20.316254359Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 9 00:16:20.322072 containerd[1558]: time="2025-09-09T00:16:20.322001457Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:20.324834 containerd[1558]: time="2025-09-09T00:16:20.324796379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:16:20.325879 containerd[1558]: time="2025-09-09T00:16:20.325829286Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.825773915s"
Sep 9 00:16:20.325879 containerd[1558]: time="2025-09-09T00:16:20.325865193Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 9 00:16:22.494134 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:16:22.494365 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.6M memory peak.
Sep 9 00:16:22.496872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:16:22.521924 systemd[1]: Reload requested from client PID 2266 ('systemctl') (unit session-9.scope)...
Sep 9 00:16:22.521986 systemd[1]: Reloading...
Sep 9 00:16:22.645802 zram_generator::config[2313]: No configuration found.
Sep 9 00:16:23.138420 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:16:23.258984 systemd[1]: Reloading finished in 736 ms.
Sep 9 00:16:23.335904 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 00:16:23.336061 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 00:16:23.336508 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:16:23.336579 systemd[1]: kubelet.service: Consumed 162ms CPU time, 98.2M memory peak.
Sep 9 00:16:23.338979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:16:23.529636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:16:23.549182 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:16:23.618755 kubelet[2358]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:16:23.618755 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:16:23.618755 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:16:23.619271 kubelet[2358]: I0909 00:16:23.618842 2358 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:16:23.822391 kubelet[2358]: I0909 00:16:23.822272 2358 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 9 00:16:23.822391 kubelet[2358]: I0909 00:16:23.822300 2358 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:16:23.822544 kubelet[2358]: I0909 00:16:23.822530 2358 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 9 00:16:23.861627 kubelet[2358]: I0909 00:16:23.861576 2358 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:16:23.866847 kubelet[2358]: E0909 00:16:23.866802 2358 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:16:23.901663 kubelet[2358]: I0909 00:16:23.901615 2358 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 00:16:23.907097 kubelet[2358]: I0909 00:16:23.907063 2358 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:16:23.908892 kubelet[2358]: I0909 00:16:23.908841 2358 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:16:23.909082 kubelet[2358]: I0909 00:16:23.908882 2358 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:16:23.909243 kubelet[2358]: I0909 00:16:23.909093 2358 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:16:23.909243 kubelet[2358]: I0909 00:16:23.909102 2358 container_manager_linux.go:304] "Creating device plugin manager"
Sep 9 00:16:23.909288 kubelet[2358]: I0909 00:16:23.909265 2358 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:16:23.916451 kubelet[2358]: I0909 00:16:23.916424 2358 kubelet.go:446] "Attempting to sync node with API server"
Sep 9 00:16:23.923366 kubelet[2358]: I0909 00:16:23.923336 2358 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:16:23.923416 kubelet[2358]: I0909 00:16:23.923375 2358 kubelet.go:352] "Adding apiserver pod source"
Sep 9 00:16:23.923416 kubelet[2358]: I0909 00:16:23.923393 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:16:23.929571 kubelet[2358]: I0909 00:16:23.929535 2358 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 9 00:16:23.930044 kubelet[2358]: I0909 00:16:23.930009 2358 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 00:16:23.930773 kubelet[2358]: W0909 00:16:23.930724 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 00:16:23.934777 kubelet[2358]: I0909 00:16:23.934748 2358 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 9 00:16:23.934837 kubelet[2358]: I0909 00:16:23.934787 2358 server.go:1287] "Started kubelet"
Sep 9 00:16:23.936937 kubelet[2358]: W0909 00:16:23.936863 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Sep 9 00:16:23.937001 kubelet[2358]: E0909 00:16:23.936939 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:16:23.937039 kubelet[2358]: W0909 00:16:23.937012 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Sep 9 00:16:23.937072 kubelet[2358]: E0909 00:16:23.937042 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:16:23.937128 kubelet[2358]: I0909 00:16:23.937098 2358 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 00:16:23.937636 kubelet[2358]: I0909 00:16:23.937605 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 00:16:23.938066 kubelet[2358]: I0909 00:16:23.938039 2358 server.go:479] "Adding debug handlers to kubelet server"
Sep 9 00:16:23.940287 kubelet[2358]: I0909 00:16:23.940226 2358 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 00:16:23.940495 kubelet[2358]: I0909 00:16:23.940466 2358 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 00:16:23.942647 kubelet[2358]: I0909 00:16:23.942626 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 00:16:23.948633 kubelet[2358]: E0909 00:16:23.948592 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 00:16:23.948762 kubelet[2358]: I0909 00:16:23.948647 2358 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 9 00:16:23.949095 kubelet[2358]: I0909 00:16:23.949064 2358 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 9 00:16:23.949248 kubelet[2358]: I0909 00:16:23.949223 2358 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 00:16:23.950485 kubelet[2358]: W0909 00:16:23.949903 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Sep 9 00:16:23.950485 kubelet[2358]: E0909 00:16:23.949968 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:16:23.950485 kubelet[2358]: E0909 00:16:23.950024 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="200ms"
Sep 9 00:16:23.950889 kubelet[2358]: I0909 00:16:23.950856 2358 factory.go:221] Registration of the systemd container factory successfully
Sep 9 00:16:23.950983 kubelet[2358]: I0909 00:16:23.950961 2358 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 00:16:23.953892 kubelet[2358]: E0909 00:16:23.952589 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.42:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.42:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863750e4fe11e3a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:16:23.934770746 +0000 UTC m=+0.380395911,LastTimestamp:2025-09-09 00:16:23.934770746 +0000 UTC m=+0.380395911,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 00:16:23.954828 kubelet[2358]: I0909 00:16:23.953985 2358 factory.go:221] Registration of the containerd container factory successfully
Sep 9 00:16:23.954828 kubelet[2358]: E0909 00:16:23.954467 2358 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 00:16:23.965532 kubelet[2358]: I0909 00:16:23.965466 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 00:16:23.967225 kubelet[2358]: I0909 00:16:23.967207 2358 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 9 00:16:23.967279 kubelet[2358]: I0909 00:16:23.967242 2358 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 9 00:16:23.967279 kubelet[2358]: I0909 00:16:23.967269 2358 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 9 00:16:23.967279 kubelet[2358]: I0909 00:16:23.967280 2358 kubelet.go:2382] "Starting kubelet main sync loop"
Sep 9 00:16:23.967397 kubelet[2358]: E0909 00:16:23.967330 2358 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 9 00:16:23.969226 kubelet[2358]: I0909 00:16:23.969190 2358 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 9 00:16:23.969226 kubelet[2358]: I0909 00:16:23.969206 2358 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 9 00:16:23.969226 kubelet[2358]: I0909 00:16:23.969228 2358 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:16:23.971249 kubelet[2358]: W0909 00:16:23.971198 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused
Sep 9 00:16:23.971790 kubelet[2358]: E0909 00:16:23.971266 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError"
Sep 9 00:16:23.977249 kubelet[2358]: I0909 00:16:23.977226 2358 policy_none.go:49] "None policy: Start"
Sep 9 00:16:23.977287 kubelet[2358]: I0909 00:16:23.977251 2358 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 9 00:16:23.977287 kubelet[2358]: I0909 00:16:23.977268 2358 state_mem.go:35] "Initializing new in-memory state store"
Sep 9 00:16:23.994912 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 9 00:16:24.006513 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 9 00:16:24.009953 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 9 00:16:24.030139 kubelet[2358]: I0909 00:16:24.030097 2358 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 9 00:16:24.030577 kubelet[2358]: I0909 00:16:24.030556 2358 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 9 00:16:24.030651 kubelet[2358]: I0909 00:16:24.030576 2358 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 9 00:16:24.031000 kubelet[2358]: I0909 00:16:24.030982 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 9 00:16:24.031708 kubelet[2358]: E0909 00:16:24.031688 2358 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 9 00:16:24.031809 kubelet[2358]: E0909 00:16:24.031790 2358 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 9 00:16:24.077261 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice.
Sep 9 00:16:24.098712 kubelet[2358]: E0909 00:16:24.098649 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:16:24.102617 systemd[1]: Created slice kubepods-burstable-pod155f7dd0ff3602e4172a82b050c543ae.slice - libcontainer container kubepods-burstable-pod155f7dd0ff3602e4172a82b050c543ae.slice.
Sep 9 00:16:24.104460 kubelet[2358]: E0909 00:16:24.104435 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:16:24.123325 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice.
Sep 9 00:16:24.125227 kubelet[2358]: E0909 00:16:24.125183 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 9 00:16:24.132367 kubelet[2358]: I0909 00:16:24.132324 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:16:24.132836 kubelet[2358]: E0909 00:16:24.132799 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Sep 9 00:16:24.150443 kubelet[2358]: I0909 00:16:24.150393 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:16:24.150443 kubelet[2358]: I0909 00:16:24.150445 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:16:24.150443 kubelet[2358]: I0909 00:16:24.150475 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/155f7dd0ff3602e4172a82b050c543ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"155f7dd0ff3602e4172a82b050c543ae\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:16:24.150664 kubelet[2358]: E0909 00:16:24.150475 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="400ms"
Sep 9 00:16:24.150664 kubelet[2358]: I0909 00:16:24.150498 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/155f7dd0ff3602e4172a82b050c543ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"155f7dd0ff3602e4172a82b050c543ae\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:16:24.150664 kubelet[2358]: I0909 00:16:24.150521 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:16:24.150664 kubelet[2358]: I0909 00:16:24.150542 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Sep 9 00:16:24.150664 kubelet[2358]: I0909 00:16:24.150560 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/155f7dd0ff3602e4172a82b050c543ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"155f7dd0ff3602e4172a82b050c543ae\") " pod="kube-system/kube-apiserver-localhost"
Sep 9 00:16:24.150835 kubelet[2358]: I0909 00:16:24.150588 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:16:24.150835 kubelet[2358]: I0909 00:16:24.150611 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Sep 9 00:16:24.334857 kubelet[2358]: I0909 00:16:24.334694 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 9 00:16:24.335145 kubelet[2358]: E0909 00:16:24.335114 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost"
Sep 9 00:16:24.400154 kubelet[2358]: E0909 00:16:24.400113 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:16:24.400948 containerd[1558]: time="2025-09-09T00:16:24.400905823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 9 00:16:24.405152 kubelet[2358]: E0909 00:16:24.405123 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:24.405581 containerd[1558]: time="2025-09-09T00:16:24.405529521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:155f7dd0ff3602e4172a82b050c543ae,Namespace:kube-system,Attempt:0,}" Sep 9 00:16:24.426205 kubelet[2358]: E0909 00:16:24.426140 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:24.427339 containerd[1558]: time="2025-09-09T00:16:24.427186991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 9 00:16:24.551903 kubelet[2358]: E0909 00:16:24.551837 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.42:6443: connect: connection refused" interval="800ms" Sep 9 00:16:24.605503 containerd[1558]: time="2025-09-09T00:16:24.605117949Z" level=info msg="connecting to shim 43d4ae2d73abd1cb3fad1fa705a7afd70b8767d17c451adb904d308fbb199fc9" address="unix:///run/containerd/s/2c9f9ff7b75fedc74df367bb98598bb88a57f660c9038c21af801477a0e87822" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:24.607828 containerd[1558]: time="2025-09-09T00:16:24.607698826Z" level=info msg="connecting to shim 
d8b50e9b0198ea990c2bd73a6341f55d78ef3bad4ce410105c7268651df4c3d9" address="unix:///run/containerd/s/127f1a350098a0b1a0234bc970a918b43873d1fbda92d571e402dbc5ca1b04a6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:24.610505 containerd[1558]: time="2025-09-09T00:16:24.610453608Z" level=info msg="connecting to shim 39d1706fd3ce6d18e31798dddbc4903e3bd05f162ccc668cf20dcd5c0b23eb18" address="unix:///run/containerd/s/cfe0429b7df50a1d38c9615da2a634bd3c330af3389059c2d04bc9101667ea9b" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:24.660070 systemd[1]: Started cri-containerd-43d4ae2d73abd1cb3fad1fa705a7afd70b8767d17c451adb904d308fbb199fc9.scope - libcontainer container 43d4ae2d73abd1cb3fad1fa705a7afd70b8767d17c451adb904d308fbb199fc9. Sep 9 00:16:24.664318 systemd[1]: Started cri-containerd-39d1706fd3ce6d18e31798dddbc4903e3bd05f162ccc668cf20dcd5c0b23eb18.scope - libcontainer container 39d1706fd3ce6d18e31798dddbc4903e3bd05f162ccc668cf20dcd5c0b23eb18. Sep 9 00:16:24.684043 systemd[1]: Started cri-containerd-d8b50e9b0198ea990c2bd73a6341f55d78ef3bad4ce410105c7268651df4c3d9.scope - libcontainer container d8b50e9b0198ea990c2bd73a6341f55d78ef3bad4ce410105c7268651df4c3d9. 
Sep 9 00:16:24.737816 kubelet[2358]: I0909 00:16:24.737774 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:16:24.738914 kubelet[2358]: E0909 00:16:24.738879 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.42:6443/api/v1/nodes\": dial tcp 10.0.0.42:6443: connect: connection refused" node="localhost" Sep 9 00:16:24.754905 containerd[1558]: time="2025-09-09T00:16:24.754843648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"43d4ae2d73abd1cb3fad1fa705a7afd70b8767d17c451adb904d308fbb199fc9\"" Sep 9 00:16:24.755610 containerd[1558]: time="2025-09-09T00:16:24.755578421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:155f7dd0ff3602e4172a82b050c543ae,Namespace:kube-system,Attempt:0,} returns sandbox id \"39d1706fd3ce6d18e31798dddbc4903e3bd05f162ccc668cf20dcd5c0b23eb18\"" Sep 9 00:16:24.756304 kubelet[2358]: E0909 00:16:24.756241 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:24.756666 kubelet[2358]: E0909 00:16:24.756639 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:24.758638 containerd[1558]: time="2025-09-09T00:16:24.758599696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8b50e9b0198ea990c2bd73a6341f55d78ef3bad4ce410105c7268651df4c3d9\"" Sep 9 00:16:24.759280 kubelet[2358]: E0909 00:16:24.759184 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:24.759485 containerd[1558]: time="2025-09-09T00:16:24.759463187Z" level=info msg="CreateContainer within sandbox \"43d4ae2d73abd1cb3fad1fa705a7afd70b8767d17c451adb904d308fbb199fc9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:16:24.759680 containerd[1558]: time="2025-09-09T00:16:24.759651199Z" level=info msg="CreateContainer within sandbox \"39d1706fd3ce6d18e31798dddbc4903e3bd05f162ccc668cf20dcd5c0b23eb18\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:16:24.761209 containerd[1558]: time="2025-09-09T00:16:24.761169792Z" level=info msg="CreateContainer within sandbox \"d8b50e9b0198ea990c2bd73a6341f55d78ef3bad4ce410105c7268651df4c3d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:16:24.781491 containerd[1558]: time="2025-09-09T00:16:24.781359096Z" level=info msg="Container e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:24.783321 containerd[1558]: time="2025-09-09T00:16:24.783257119Z" level=info msg="Container 104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:24.787281 containerd[1558]: time="2025-09-09T00:16:24.787219655Z" level=info msg="Container 553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:24.802168 containerd[1558]: time="2025-09-09T00:16:24.802090243Z" level=info msg="CreateContainer within sandbox \"39d1706fd3ce6d18e31798dddbc4903e3bd05f162ccc668cf20dcd5c0b23eb18\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540\"" Sep 9 00:16:24.802551 containerd[1558]: time="2025-09-09T00:16:24.802501054Z" level=info msg="CreateContainer within sandbox 
\"43d4ae2d73abd1cb3fad1fa705a7afd70b8767d17c451adb904d308fbb199fc9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f\"" Sep 9 00:16:24.803331 containerd[1558]: time="2025-09-09T00:16:24.803247711Z" level=info msg="StartContainer for \"104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f\"" Sep 9 00:16:24.803452 containerd[1558]: time="2025-09-09T00:16:24.803262540Z" level=info msg="StartContainer for \"e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540\"" Sep 9 00:16:24.804524 containerd[1558]: time="2025-09-09T00:16:24.804498569Z" level=info msg="connecting to shim 104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f" address="unix:///run/containerd/s/2c9f9ff7b75fedc74df367bb98598bb88a57f660c9038c21af801477a0e87822" protocol=ttrpc version=3 Sep 9 00:16:24.804674 containerd[1558]: time="2025-09-09T00:16:24.804645411Z" level=info msg="connecting to shim e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540" address="unix:///run/containerd/s/cfe0429b7df50a1d38c9615da2a634bd3c330af3389059c2d04bc9101667ea9b" protocol=ttrpc version=3 Sep 9 00:16:24.807138 containerd[1558]: time="2025-09-09T00:16:24.807092520Z" level=info msg="CreateContainer within sandbox \"d8b50e9b0198ea990c2bd73a6341f55d78ef3bad4ce410105c7268651df4c3d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38\"" Sep 9 00:16:24.808004 containerd[1558]: time="2025-09-09T00:16:24.807947637Z" level=info msg="StartContainer for \"553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38\"" Sep 9 00:16:24.809209 containerd[1558]: time="2025-09-09T00:16:24.809179707Z" level=info msg="connecting to shim 553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38" address="unix:///run/containerd/s/127f1a350098a0b1a0234bc970a918b43873d1fbda92d571e402dbc5ca1b04a6" 
protocol=ttrpc version=3 Sep 9 00:16:24.836908 systemd[1]: Started cri-containerd-104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f.scope - libcontainer container 104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f. Sep 9 00:16:24.838288 systemd[1]: Started cri-containerd-553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38.scope - libcontainer container 553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38. Sep 9 00:16:24.839511 systemd[1]: Started cri-containerd-e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540.scope - libcontainer container e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540. Sep 9 00:16:24.856861 kubelet[2358]: W0909 00:16:24.856665 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Sep 9 00:16:24.856861 kubelet[2358]: E0909 00:16:24.856768 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:16:24.888771 kubelet[2358]: W0909 00:16:24.888651 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Sep 9 00:16:24.890213 kubelet[2358]: E0909 00:16:24.889691 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:16:24.895318 kubelet[2358]: W0909 00:16:24.895218 2358 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.42:6443: connect: connection refused Sep 9 00:16:24.895318 kubelet[2358]: E0909 00:16:24.895289 2358 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.42:6443: connect: connection refused" logger="UnhandledError" Sep 9 00:16:24.920647 containerd[1558]: time="2025-09-09T00:16:24.920585158Z" level=info msg="StartContainer for \"e6dd9ee8eafd91e4703aab0949ac42363803621b86e0a665031f8a100b339540\" returns successfully" Sep 9 00:16:24.921457 containerd[1558]: time="2025-09-09T00:16:24.921428651Z" level=info msg="StartContainer for \"553fef43d3154de9e1cbc3ef7be09c3eda1ec6dec332f14df1d00afac2d1ed38\" returns successfully" Sep 9 00:16:24.922054 containerd[1558]: time="2025-09-09T00:16:24.921989059Z" level=info msg="StartContainer for \"104bd90dee7c831ad918973d8ca003ecaebb44a6b8ba67b2f2b8c2ec6f4c120f\" returns successfully" Sep 9 00:16:24.999638 kubelet[2358]: E0909 00:16:24.999563 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:16:25.002276 kubelet[2358]: E0909 00:16:25.002106 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:25.007813 kubelet[2358]: E0909 00:16:25.007775 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:16:25.008266 kubelet[2358]: E0909 00:16:25.008128 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:25.009081 kubelet[2358]: E0909 00:16:25.008952 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:16:25.009272 kubelet[2358]: E0909 00:16:25.009258 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:25.541871 kubelet[2358]: I0909 00:16:25.541814 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:16:26.008455 kubelet[2358]: E0909 00:16:26.008422 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:16:26.008938 kubelet[2358]: E0909 00:16:26.008480 2358 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:16:26.008938 kubelet[2358]: E0909 00:16:26.008536 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:26.008938 kubelet[2358]: E0909 00:16:26.008582 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:26.344203 kubelet[2358]: E0909 00:16:26.344059 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 
00:16:26.442788 kubelet[2358]: I0909 00:16:26.442628 2358 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:16:26.442788 kubelet[2358]: E0909 00:16:26.442663 2358 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:16:26.451849 kubelet[2358]: E0909 00:16:26.451785 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:26.552640 kubelet[2358]: E0909 00:16:26.552590 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:26.653328 kubelet[2358]: E0909 00:16:26.653178 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:26.754230 kubelet[2358]: E0909 00:16:26.754181 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:26.855070 kubelet[2358]: E0909 00:16:26.855009 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:26.955712 kubelet[2358]: E0909 00:16:26.955661 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.056169 kubelet[2358]: E0909 00:16:27.056112 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.157019 kubelet[2358]: E0909 00:16:27.156956 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.258008 kubelet[2358]: E0909 00:16:27.257849 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.359119 kubelet[2358]: E0909 00:16:27.359026 2358 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.459166 kubelet[2358]: E0909 00:16:27.459110 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.559966 kubelet[2358]: E0909 00:16:27.559795 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.660460 kubelet[2358]: E0909 00:16:27.660395 2358 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:16:27.750784 kubelet[2358]: I0909 00:16:27.750684 2358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:16:27.758724 kubelet[2358]: I0909 00:16:27.758672 2358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:27.767126 kubelet[2358]: I0909 00:16:27.767056 2358 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:27.931349 kubelet[2358]: I0909 00:16:27.931288 2358 apiserver.go:52] "Watching apiserver" Sep 9 00:16:27.933497 kubelet[2358]: E0909 00:16:27.933420 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:27.933653 kubelet[2358]: E0909 00:16:27.933627 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:27.933960 kubelet[2358]: E0909 00:16:27.933924 2358 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:27.950233 kubelet[2358]: I0909 00:16:27.950186 2358 desired_state_of_world_populator.go:158] "Finished 
populating initial desired state of world" Sep 9 00:16:28.498919 systemd[1]: Reload requested from client PID 2631 ('systemctl') (unit session-9.scope)... Sep 9 00:16:28.498937 systemd[1]: Reloading... Sep 9 00:16:28.592762 zram_generator::config[2674]: No configuration found. Sep 9 00:16:28.705249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:16:28.848616 systemd[1]: Reloading finished in 349 ms. Sep 9 00:16:28.884470 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:16:28.910370 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:16:28.910765 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:16:28.910832 systemd[1]: kubelet.service: Consumed 761ms CPU time, 131.8M memory peak. Sep 9 00:16:28.913272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:16:29.177880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:16:29.192176 (kubelet)[2719]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:16:29.281387 kubelet[2719]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:16:29.281387 kubelet[2719]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:16:29.281387 kubelet[2719]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:16:29.281872 kubelet[2719]: I0909 00:16:29.281445 2719 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:16:29.287679 kubelet[2719]: I0909 00:16:29.287645 2719 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 9 00:16:29.287679 kubelet[2719]: I0909 00:16:29.287667 2719 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:16:29.287933 kubelet[2719]: I0909 00:16:29.287882 2719 server.go:954] "Client rotation is on, will bootstrap in background" Sep 9 00:16:29.289040 kubelet[2719]: I0909 00:16:29.289010 2719 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 00:16:29.291071 kubelet[2719]: I0909 00:16:29.291037 2719 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:16:29.295930 kubelet[2719]: I0909 00:16:29.295901 2719 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 00:16:29.300439 kubelet[2719]: I0909 00:16:29.300376 2719 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 00:16:29.300668 kubelet[2719]: I0909 00:16:29.300631 2719 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:16:29.300863 kubelet[2719]: I0909 00:16:29.300667 2719 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:16:29.300984 kubelet[2719]: I0909 00:16:29.300866 2719 topology_manager.go:138] "Creating topology manager with none policy" Sep 
9 00:16:29.300984 kubelet[2719]: I0909 00:16:29.300875 2719 container_manager_linux.go:304] "Creating device plugin manager" Sep 9 00:16:29.300984 kubelet[2719]: I0909 00:16:29.300927 2719 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:16:29.301872 kubelet[2719]: I0909 00:16:29.301858 2719 kubelet.go:446] "Attempting to sync node with API server" Sep 9 00:16:29.301945 kubelet[2719]: I0909 00:16:29.301933 2719 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:16:29.304777 kubelet[2719]: I0909 00:16:29.304749 2719 kubelet.go:352] "Adding apiserver pod source" Sep 9 00:16:29.304777 kubelet[2719]: I0909 00:16:29.304775 2719 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:16:29.306454 kubelet[2719]: I0909 00:16:29.306352 2719 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 9 00:16:29.306708 kubelet[2719]: I0909 00:16:29.306691 2719 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 00:16:29.307156 kubelet[2719]: I0909 00:16:29.307138 2719 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:16:29.307189 kubelet[2719]: I0909 00:16:29.307183 2719 server.go:1287] "Started kubelet" Sep 9 00:16:29.308873 kubelet[2719]: I0909 00:16:29.308820 2719 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:16:29.309174 kubelet[2719]: I0909 00:16:29.309156 2719 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:16:29.309232 kubelet[2719]: I0909 00:16:29.309212 2719 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:16:29.311707 kubelet[2719]: I0909 00:16:29.311335 2719 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:16:29.312758 kubelet[2719]: I0909 00:16:29.312461 2719 server.go:479] 
"Adding debug handlers to kubelet server" Sep 9 00:16:29.312876 kubelet[2719]: I0909 00:16:29.312853 2719 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:16:29.314950 kubelet[2719]: I0909 00:16:29.314926 2719 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:16:29.315700 kubelet[2719]: I0909 00:16:29.315678 2719 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:16:29.315929 kubelet[2719]: I0909 00:16:29.315910 2719 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:16:29.316988 kubelet[2719]: I0909 00:16:29.316968 2719 factory.go:221] Registration of the systemd container factory successfully Sep 9 00:16:29.317076 kubelet[2719]: I0909 00:16:29.317050 2719 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:16:29.320031 kubelet[2719]: E0909 00:16:29.319980 2719 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:16:29.322533 kubelet[2719]: I0909 00:16:29.322508 2719 factory.go:221] Registration of the containerd container factory successfully Sep 9 00:16:29.331034 kubelet[2719]: I0909 00:16:29.330977 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 00:16:29.332380 kubelet[2719]: I0909 00:16:29.332364 2719 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 9 00:16:29.332752 kubelet[2719]: I0909 00:16:29.332632 2719 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 9 00:16:29.332752 kubelet[2719]: I0909 00:16:29.332659 2719 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 9 00:16:29.332752 kubelet[2719]: I0909 00:16:29.332666 2719 kubelet.go:2382] "Starting kubelet main sync loop" Sep 9 00:16:29.332892 kubelet[2719]: E0909 00:16:29.332716 2719 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:16:29.357330 kubelet[2719]: I0909 00:16:29.357292 2719 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:16:29.357330 kubelet[2719]: I0909 00:16:29.357308 2719 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:16:29.357330 kubelet[2719]: I0909 00:16:29.357326 2719 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:16:29.357513 kubelet[2719]: I0909 00:16:29.357487 2719 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:16:29.357513 kubelet[2719]: I0909 00:16:29.357497 2719 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:16:29.357554 kubelet[2719]: I0909 00:16:29.357519 2719 policy_none.go:49] "None policy: Start" Sep 9 00:16:29.357554 kubelet[2719]: I0909 00:16:29.357533 2719 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:16:29.357554 kubelet[2719]: I0909 00:16:29.357542 2719 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:16:29.357652 kubelet[2719]: I0909 00:16:29.357639 2719 state_mem.go:75] "Updated machine memory state" Sep 9 00:16:29.362885 kubelet[2719]: I0909 00:16:29.362759 2719 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 00:16:29.362972 kubelet[2719]: I0909 00:16:29.362941 2719 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:16:29.362972 kubelet[2719]: I0909 00:16:29.362951 2719 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:16:29.363160 kubelet[2719]: I0909 00:16:29.363135 2719 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Sep 9 00:16:29.364336 kubelet[2719]: E0909 00:16:29.364099 2719 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:16:29.433693 kubelet[2719]: I0909 00:16:29.433573 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.433837 kubelet[2719]: I0909 00:16:29.433798 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:29.433837 kubelet[2719]: I0909 00:16:29.433573 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:16:29.439283 kubelet[2719]: E0909 00:16:29.439211 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:16:29.439817 kubelet[2719]: E0909 00:16:29.439774 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.439897 kubelet[2719]: E0909 00:16:29.439821 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:29.475374 kubelet[2719]: I0909 00:16:29.475335 2719 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:16:29.483868 kubelet[2719]: I0909 00:16:29.483752 2719 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:16:29.483868 kubelet[2719]: I0909 00:16:29.483820 2719 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:16:29.616854 kubelet[2719]: I0909 00:16:29.616797 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.616854 kubelet[2719]: I0909 00:16:29.616839 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.617039 kubelet[2719]: I0909 00:16:29.616889 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:16:29.617039 kubelet[2719]: I0909 00:16:29.616945 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/155f7dd0ff3602e4172a82b050c543ae-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"155f7dd0ff3602e4172a82b050c543ae\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:29.617039 kubelet[2719]: I0909 00:16:29.616973 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/155f7dd0ff3602e4172a82b050c543ae-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"155f7dd0ff3602e4172a82b050c543ae\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:29.617039 kubelet[2719]: I0909 00:16:29.616996 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.617039 kubelet[2719]: I0909 00:16:29.617018 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.617162 kubelet[2719]: I0909 00:16:29.617039 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:16:29.617162 kubelet[2719]: I0909 00:16:29.617056 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/155f7dd0ff3602e4172a82b050c543ae-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"155f7dd0ff3602e4172a82b050c543ae\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:29.740778 kubelet[2719]: E0909 00:16:29.740508 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:29.740778 kubelet[2719]: E0909 00:16:29.740549 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:29.740778 kubelet[2719]: E0909 00:16:29.740639 2719 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:30.305318 kubelet[2719]: I0909 00:16:30.305256 2719 apiserver.go:52] "Watching apiserver" Sep 9 00:16:30.316674 kubelet[2719]: I0909 00:16:30.316635 2719 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:16:30.346464 kubelet[2719]: E0909 00:16:30.346420 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:30.347180 kubelet[2719]: E0909 00:16:30.347150 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:30.347265 kubelet[2719]: I0909 00:16:30.347231 2719 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:30.354778 kubelet[2719]: E0909 00:16:30.354758 2719 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:16:30.354970 kubelet[2719]: E0909 00:16:30.354956 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:30.375717 kubelet[2719]: I0909 00:16:30.375649 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.375625712 podStartE2EDuration="3.375625712s" podCreationTimestamp="2025-09-09 00:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:16:30.375442511 +0000 UTC m=+1.178426622" watchObservedRunningTime="2025-09-09 
00:16:30.375625712 +0000 UTC m=+1.178609823" Sep 9 00:16:30.375935 kubelet[2719]: I0909 00:16:30.375797 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.375789534 podStartE2EDuration="3.375789534s" podCreationTimestamp="2025-09-09 00:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:16:30.367613258 +0000 UTC m=+1.170597359" watchObservedRunningTime="2025-09-09 00:16:30.375789534 +0000 UTC m=+1.178773645" Sep 9 00:16:30.391343 kubelet[2719]: I0909 00:16:30.391257 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.391236899 podStartE2EDuration="3.391236899s" podCreationTimestamp="2025-09-09 00:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:16:30.383721304 +0000 UTC m=+1.186705415" watchObservedRunningTime="2025-09-09 00:16:30.391236899 +0000 UTC m=+1.194221010" Sep 9 00:16:31.347983 kubelet[2719]: E0909 00:16:31.347762 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:31.347983 kubelet[2719]: E0909 00:16:31.347882 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:31.347983 kubelet[2719]: E0909 00:16:31.347903 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:32.349119 kubelet[2719]: E0909 00:16:32.349079 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:33.692039 kubelet[2719]: I0909 00:16:33.691943 2719 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:16:33.692981 containerd[1558]: time="2025-09-09T00:16:33.692931829Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:16:33.693728 kubelet[2719]: I0909 00:16:33.693550 2719 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:16:34.399197 systemd[1]: Created slice kubepods-besteffort-pod7b8fac58_6574_4fe4_8e85_222b85e6b067.slice - libcontainer container kubepods-besteffort-pod7b8fac58_6574_4fe4_8e85_222b85e6b067.slice. Sep 9 00:16:34.448719 kubelet[2719]: I0909 00:16:34.448656 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7b8fac58-6574-4fe4-8e85-222b85e6b067-kube-proxy\") pod \"kube-proxy-pjjbc\" (UID: \"7b8fac58-6574-4fe4-8e85-222b85e6b067\") " pod="kube-system/kube-proxy-pjjbc" Sep 9 00:16:34.448719 kubelet[2719]: I0909 00:16:34.448708 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b8fac58-6574-4fe4-8e85-222b85e6b067-lib-modules\") pod \"kube-proxy-pjjbc\" (UID: \"7b8fac58-6574-4fe4-8e85-222b85e6b067\") " pod="kube-system/kube-proxy-pjjbc" Sep 9 00:16:34.448719 kubelet[2719]: I0909 00:16:34.448722 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b8fac58-6574-4fe4-8e85-222b85e6b067-xtables-lock\") pod \"kube-proxy-pjjbc\" (UID: \"7b8fac58-6574-4fe4-8e85-222b85e6b067\") " pod="kube-system/kube-proxy-pjjbc" Sep 9 00:16:34.448959 kubelet[2719]: I0909 
00:16:34.448755 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn92d\" (UniqueName: \"kubernetes.io/projected/7b8fac58-6574-4fe4-8e85-222b85e6b067-kube-api-access-jn92d\") pod \"kube-proxy-pjjbc\" (UID: \"7b8fac58-6574-4fe4-8e85-222b85e6b067\") " pod="kube-system/kube-proxy-pjjbc" Sep 9 00:16:34.554302 kubelet[2719]: E0909 00:16:34.554259 2719 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 9 00:16:34.554302 kubelet[2719]: E0909 00:16:34.554296 2719 projected.go:194] Error preparing data for projected volume kube-api-access-jn92d for pod kube-system/kube-proxy-pjjbc: configmap "kube-root-ca.crt" not found Sep 9 00:16:34.554498 kubelet[2719]: E0909 00:16:34.554379 2719 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7b8fac58-6574-4fe4-8e85-222b85e6b067-kube-api-access-jn92d podName:7b8fac58-6574-4fe4-8e85-222b85e6b067 nodeName:}" failed. No retries permitted until 2025-09-09 00:16:35.054337266 +0000 UTC m=+5.857321367 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-jn92d" (UniqueName: "kubernetes.io/projected/7b8fac58-6574-4fe4-8e85-222b85e6b067-kube-api-access-jn92d") pod "kube-proxy-pjjbc" (UID: "7b8fac58-6574-4fe4-8e85-222b85e6b067") : configmap "kube-root-ca.crt" not found Sep 9 00:16:34.811635 systemd[1]: Created slice kubepods-besteffort-pod67fabe15_2c1c_4c07_901a_078108207db3.slice - libcontainer container kubepods-besteffort-pod67fabe15_2c1c_4c07_901a_078108207db3.slice. 
Sep 9 00:16:34.852307 kubelet[2719]: I0909 00:16:34.852262 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsxw9\" (UniqueName: \"kubernetes.io/projected/67fabe15-2c1c-4c07-901a-078108207db3-kube-api-access-vsxw9\") pod \"tigera-operator-755d956888-hwjkz\" (UID: \"67fabe15-2c1c-4c07-901a-078108207db3\") " pod="tigera-operator/tigera-operator-755d956888-hwjkz" Sep 9 00:16:34.852307 kubelet[2719]: I0909 00:16:34.852304 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/67fabe15-2c1c-4c07-901a-078108207db3-var-lib-calico\") pod \"tigera-operator-755d956888-hwjkz\" (UID: \"67fabe15-2c1c-4c07-901a-078108207db3\") " pod="tigera-operator/tigera-operator-755d956888-hwjkz" Sep 9 00:16:35.117106 containerd[1558]: time="2025-09-09T00:16:35.116425447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-hwjkz,Uid:67fabe15-2c1c-4c07-901a-078108207db3,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:16:35.140795 containerd[1558]: time="2025-09-09T00:16:35.140753078Z" level=info msg="connecting to shim ef221aad9e569fb15193806e4c23b0853851007ddaeac0ed83fd1e9c4a14fcee" address="unix:///run/containerd/s/2482d70a9b3be149ec28ddb564b9bd6735f3342fc29076dabc4f2d3eb5061685" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:35.185953 systemd[1]: Started cri-containerd-ef221aad9e569fb15193806e4c23b0853851007ddaeac0ed83fd1e9c4a14fcee.scope - libcontainer container ef221aad9e569fb15193806e4c23b0853851007ddaeac0ed83fd1e9c4a14fcee. 
Sep 9 00:16:35.232314 containerd[1558]: time="2025-09-09T00:16:35.232261229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-hwjkz,Uid:67fabe15-2c1c-4c07-901a-078108207db3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ef221aad9e569fb15193806e4c23b0853851007ddaeac0ed83fd1e9c4a14fcee\"" Sep 9 00:16:35.234351 containerd[1558]: time="2025-09-09T00:16:35.234302506Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:16:35.311181 kubelet[2719]: E0909 00:16:35.311113 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:35.311783 containerd[1558]: time="2025-09-09T00:16:35.311719481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjjbc,Uid:7b8fac58-6574-4fe4-8e85-222b85e6b067,Namespace:kube-system,Attempt:0,}" Sep 9 00:16:35.336450 containerd[1558]: time="2025-09-09T00:16:35.336402227Z" level=info msg="connecting to shim 9eb6a0beaf470ebe0883f23f8a6520acc466504a047be423d2fe423808c0f4bc" address="unix:///run/containerd/s/4ba8efa7ed993434978060b58c9ac4151259a2f8fd4522229fe2b3086cff6d75" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:35.369975 systemd[1]: Started cri-containerd-9eb6a0beaf470ebe0883f23f8a6520acc466504a047be423d2fe423808c0f4bc.scope - libcontainer container 9eb6a0beaf470ebe0883f23f8a6520acc466504a047be423d2fe423808c0f4bc. 
Sep 9 00:16:35.406204 containerd[1558]: time="2025-09-09T00:16:35.406141378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjjbc,Uid:7b8fac58-6574-4fe4-8e85-222b85e6b067,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb6a0beaf470ebe0883f23f8a6520acc466504a047be423d2fe423808c0f4bc\"" Sep 9 00:16:35.407287 kubelet[2719]: E0909 00:16:35.407249 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:35.410197 containerd[1558]: time="2025-09-09T00:16:35.410148399Z" level=info msg="CreateContainer within sandbox \"9eb6a0beaf470ebe0883f23f8a6520acc466504a047be423d2fe423808c0f4bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:16:35.446120 kubelet[2719]: E0909 00:16:35.446087 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:35.461383 containerd[1558]: time="2025-09-09T00:16:35.461328016Z" level=info msg="Container 4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:35.478885 containerd[1558]: time="2025-09-09T00:16:35.478820073Z" level=info msg="CreateContainer within sandbox \"9eb6a0beaf470ebe0883f23f8a6520acc466504a047be423d2fe423808c0f4bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34\"" Sep 9 00:16:35.479586 containerd[1558]: time="2025-09-09T00:16:35.479527366Z" level=info msg="StartContainer for \"4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34\"" Sep 9 00:16:35.481002 containerd[1558]: time="2025-09-09T00:16:35.480967773Z" level=info msg="connecting to shim 4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34" 
address="unix:///run/containerd/s/4ba8efa7ed993434978060b58c9ac4151259a2f8fd4522229fe2b3086cff6d75" protocol=ttrpc version=3 Sep 9 00:16:35.509940 systemd[1]: Started cri-containerd-4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34.scope - libcontainer container 4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34. Sep 9 00:16:35.562359 containerd[1558]: time="2025-09-09T00:16:35.562320474Z" level=info msg="StartContainer for \"4e59eb8e80b23f6f2903ef28440c49b387bca3f73eedf4aa452c09c9b916ba34\" returns successfully" Sep 9 00:16:35.697935 update_engine[1539]: I20250909 00:16:35.697849 1539 update_attempter.cc:509] Updating boot flags... Sep 9 00:16:36.359550 kubelet[2719]: E0909 00:16:36.359505 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:36.360976 kubelet[2719]: E0909 00:16:36.359605 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:36.370557 kubelet[2719]: I0909 00:16:36.370479 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjjbc" podStartSLOduration=2.3704534219999998 podStartE2EDuration="2.370453422s" podCreationTimestamp="2025-09-09 00:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:16:36.370271567 +0000 UTC m=+7.173255689" watchObservedRunningTime="2025-09-09 00:16:36.370453422 +0000 UTC m=+7.173437544" Sep 9 00:16:36.885188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535835627.mount: Deactivated successfully. 
Sep 9 00:16:37.360590 kubelet[2719]: E0909 00:16:37.360556 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:40.032758 containerd[1558]: time="2025-09-09T00:16:40.032677732Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:40.033497 containerd[1558]: time="2025-09-09T00:16:40.033425517Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:16:40.034649 containerd[1558]: time="2025-09-09T00:16:40.034578380Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:40.036411 containerd[1558]: time="2025-09-09T00:16:40.036383346Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:40.036944 containerd[1558]: time="2025-09-09T00:16:40.036902849Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.802562101s" Sep 9 00:16:40.036944 containerd[1558]: time="2025-09-09T00:16:40.036936173Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:16:40.040652 containerd[1558]: time="2025-09-09T00:16:40.040613854Z" level=info msg="CreateContainer within sandbox 
\"ef221aad9e569fb15193806e4c23b0853851007ddaeac0ed83fd1e9c4a14fcee\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:16:40.049380 containerd[1558]: time="2025-09-09T00:16:40.049327558Z" level=info msg="Container 2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:40.052964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3464580974.mount: Deactivated successfully. Sep 9 00:16:40.056429 containerd[1558]: time="2025-09-09T00:16:40.056397891Z" level=info msg="CreateContainer within sandbox \"ef221aad9e569fb15193806e4c23b0853851007ddaeac0ed83fd1e9c4a14fcee\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64\"" Sep 9 00:16:40.056904 containerd[1558]: time="2025-09-09T00:16:40.056876758Z" level=info msg="StartContainer for \"2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64\"" Sep 9 00:16:40.057704 containerd[1558]: time="2025-09-09T00:16:40.057680459Z" level=info msg="connecting to shim 2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64" address="unix:///run/containerd/s/2482d70a9b3be149ec28ddb564b9bd6735f3342fc29076dabc4f2d3eb5061685" protocol=ttrpc version=3 Sep 9 00:16:40.106874 systemd[1]: Started cri-containerd-2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64.scope - libcontainer container 2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64. 
Sep 9 00:16:40.139184 containerd[1558]: time="2025-09-09T00:16:40.139129971Z" level=info msg="StartContainer for \"2b9f8f0ce15a0ebd0afce70dc484352e7c65be3b84e3859bf9fc74cc7f126a64\" returns successfully" Sep 9 00:16:40.350102 kubelet[2719]: E0909 00:16:40.349957 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:41.732938 kubelet[2719]: E0909 00:16:41.732878 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:41.744710 kubelet[2719]: I0909 00:16:41.744640 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-hwjkz" podStartSLOduration=2.940785495 podStartE2EDuration="7.744613374s" podCreationTimestamp="2025-09-09 00:16:34 +0000 UTC" firstStartedPulling="2025-09-09 00:16:35.233870877 +0000 UTC m=+6.036854988" lastFinishedPulling="2025-09-09 00:16:40.037698756 +0000 UTC m=+10.840682867" observedRunningTime="2025-09-09 00:16:40.380769571 +0000 UTC m=+11.183753692" watchObservedRunningTime="2025-09-09 00:16:41.744613374 +0000 UTC m=+12.547597485" Sep 9 00:16:45.598073 sudo[1795]: pam_unix(sudo:session): session closed for user root Sep 9 00:16:45.599550 sshd[1792]: Connection closed by 10.0.0.1 port 38382 Sep 9 00:16:45.601221 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Sep 9 00:16:45.610176 systemd[1]: sshd@8-10.0.0.42:22-10.0.0.1:38382.service: Deactivated successfully. Sep 9 00:16:45.615940 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:16:45.616274 systemd[1]: session-9.scope: Consumed 4.845s CPU time, 225.6M memory peak. Sep 9 00:16:45.618964 systemd-logind[1534]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:16:45.621181 systemd-logind[1534]: Removed session 9. 
Sep 9 00:16:48.336328 systemd[1]: Created slice kubepods-besteffort-podcf8b113f_fa0b_4312_b127_a5af51b22642.slice - libcontainer container kubepods-besteffort-podcf8b113f_fa0b_4312_b127_a5af51b22642.slice. Sep 9 00:16:48.339601 kubelet[2719]: I0909 00:16:48.339424 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf8b113f-fa0b-4312-b127-a5af51b22642-tigera-ca-bundle\") pod \"calico-typha-5447b697b-x7z4q\" (UID: \"cf8b113f-fa0b-4312-b127-a5af51b22642\") " pod="calico-system/calico-typha-5447b697b-x7z4q" Sep 9 00:16:48.339601 kubelet[2719]: I0909 00:16:48.339457 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj9dq\" (UniqueName: \"kubernetes.io/projected/cf8b113f-fa0b-4312-b127-a5af51b22642-kube-api-access-jj9dq\") pod \"calico-typha-5447b697b-x7z4q\" (UID: \"cf8b113f-fa0b-4312-b127-a5af51b22642\") " pod="calico-system/calico-typha-5447b697b-x7z4q" Sep 9 00:16:48.339601 kubelet[2719]: I0909 00:16:48.339473 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/cf8b113f-fa0b-4312-b127-a5af51b22642-typha-certs\") pod \"calico-typha-5447b697b-x7z4q\" (UID: \"cf8b113f-fa0b-4312-b127-a5af51b22642\") " pod="calico-system/calico-typha-5447b697b-x7z4q" Sep 9 00:16:48.620798 systemd[1]: Created slice kubepods-besteffort-pode936d231_b45d_4a81_a1e8_1f83e58b1dca.slice - libcontainer container kubepods-besteffort-pode936d231_b45d_4a81_a1e8_1f83e58b1dca.slice. 
Sep 9 00:16:48.641260 kubelet[2719]: I0909 00:16:48.641183 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-cni-bin-dir\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641260 kubelet[2719]: I0909 00:16:48.641242 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvwmz\" (UniqueName: \"kubernetes.io/projected/e936d231-b45d-4a81-a1e8-1f83e58b1dca-kube-api-access-xvwmz\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641260 kubelet[2719]: I0909 00:16:48.641261 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-cni-log-dir\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641608 kubelet[2719]: I0909 00:16:48.641277 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-var-lib-calico\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641608 kubelet[2719]: I0909 00:16:48.641296 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-flexvol-driver-host\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641608 kubelet[2719]: I0909 
00:16:48.641310 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e936d231-b45d-4a81-a1e8-1f83e58b1dca-node-certs\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641608 kubelet[2719]: I0909 00:16:48.641325 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-cni-net-dir\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.641608 kubelet[2719]: I0909 00:16:48.641338 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-var-run-calico\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.642330 kubelet[2719]: I0909 00:16:48.641353 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-xtables-lock\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.642330 kubelet[2719]: I0909 00:16:48.641367 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-lib-modules\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.642330 kubelet[2719]: I0909 00:16:48.641383 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e936d231-b45d-4a81-a1e8-1f83e58b1dca-tigera-ca-bundle\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.642330 kubelet[2719]: I0909 00:16:48.641402 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e936d231-b45d-4a81-a1e8-1f83e58b1dca-policysync\") pod \"calico-node-vpmmj\" (UID: \"e936d231-b45d-4a81-a1e8-1f83e58b1dca\") " pod="calico-system/calico-node-vpmmj" Sep 9 00:16:48.642330 kubelet[2719]: E0909 00:16:48.641642 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:48.645078 containerd[1558]: time="2025-09-09T00:16:48.645024012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5447b697b-x7z4q,Uid:cf8b113f-fa0b-4312-b127-a5af51b22642,Namespace:calico-system,Attempt:0,}" Sep 9 00:16:48.703871 containerd[1558]: time="2025-09-09T00:16:48.703810651Z" level=info msg="connecting to shim c6788ebaf6d2f863b1c6e8c3aa8de6b8584db0eeff0b9f27f3f465abcbe3643f" address="unix:///run/containerd/s/1f52271a0aca0df775a3d08dcd77aa50f0bc1445d2cb1557d3a87e6e2c0a1e66" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:48.736058 systemd[1]: Started cri-containerd-c6788ebaf6d2f863b1c6e8c3aa8de6b8584db0eeff0b9f27f3f465abcbe3643f.scope - libcontainer container c6788ebaf6d2f863b1c6e8c3aa8de6b8584db0eeff0b9f27f3f465abcbe3643f. 
Sep 9 00:16:48.744981 kubelet[2719]: E0909 00:16:48.744945 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.744981 kubelet[2719]: W0909 00:16:48.744969 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.745180 kubelet[2719]: E0909 00:16:48.745012 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.745503 kubelet[2719]: E0909 00:16:48.745467 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.745503 kubelet[2719]: W0909 00:16:48.745482 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.745601 kubelet[2719]: E0909 00:16:48.745530 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.745891 kubelet[2719]: E0909 00:16:48.745809 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.745968 kubelet[2719]: W0909 00:16:48.745892 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.745968 kubelet[2719]: E0909 00:16:48.745908 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.746426 kubelet[2719]: E0909 00:16:48.746400 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.746426 kubelet[2719]: W0909 00:16:48.746417 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.746529 kubelet[2719]: E0909 00:16:48.746431 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.746704 kubelet[2719]: E0909 00:16:48.746666 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.746704 kubelet[2719]: W0909 00:16:48.746699 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.746919 kubelet[2719]: E0909 00:16:48.746711 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.756774 kubelet[2719]: E0909 00:16:48.753476 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.756774 kubelet[2719]: W0909 00:16:48.753500 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.756774 kubelet[2719]: E0909 00:16:48.753522 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.760142 kubelet[2719]: E0909 00:16:48.760103 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.760142 kubelet[2719]: W0909 00:16:48.760135 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.760282 kubelet[2719]: E0909 00:16:48.760168 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.773272 kubelet[2719]: E0909 00:16:48.773214 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998"
Sep 9 00:16:48.807994 containerd[1558]: time="2025-09-09T00:16:48.807941982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5447b697b-x7z4q,Uid:cf8b113f-fa0b-4312-b127-a5af51b22642,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6788ebaf6d2f863b1c6e8c3aa8de6b8584db0eeff0b9f27f3f465abcbe3643f\""
Sep 9 00:16:48.808868 kubelet[2719]: E0909 00:16:48.808829 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:16:48.810794 containerd[1558]: time="2025-09-09T00:16:48.810562304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 9 00:16:48.827429 kubelet[2719]: E0909 00:16:48.827390 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.827429 kubelet[2719]: W0909 00:16:48.827416 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.827618 kubelet[2719]: E0909 00:16:48.827441 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.827618 kubelet[2719]: E0909 00:16:48.827608 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.827618 kubelet[2719]: W0909 00:16:48.827616 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.827749 kubelet[2719]: E0909 00:16:48.827625 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.827810 kubelet[2719]: E0909 00:16:48.827795 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.827810 kubelet[2719]: W0909 00:16:48.827805 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.827875 kubelet[2719]: E0909 00:16:48.827813 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828035 kubelet[2719]: E0909 00:16:48.828019 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828035 kubelet[2719]: W0909 00:16:48.828029 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.828105 kubelet[2719]: E0909 00:16:48.828037 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828210 kubelet[2719]: E0909 00:16:48.828181 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828210 kubelet[2719]: W0909 00:16:48.828196 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.828210 kubelet[2719]: E0909 00:16:48.828204 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828344 kubelet[2719]: E0909 00:16:48.828331 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828344 kubelet[2719]: W0909 00:16:48.828340 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.828409 kubelet[2719]: E0909 00:16:48.828347 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828489 kubelet[2719]: E0909 00:16:48.828475 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828489 kubelet[2719]: W0909 00:16:48.828485 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.828544 kubelet[2719]: E0909 00:16:48.828492 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828631 kubelet[2719]: E0909 00:16:48.828618 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828631 kubelet[2719]: W0909 00:16:48.828627 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.828714 kubelet[2719]: E0909 00:16:48.828634 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828803 kubelet[2719]: E0909 00:16:48.828788 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828803 kubelet[2719]: W0909 00:16:48.828800 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.828871 kubelet[2719]: E0909 00:16:48.828808 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.828951 kubelet[2719]: E0909 00:16:48.828937 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.828951 kubelet[2719]: W0909 00:16:48.828946 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.829014 kubelet[2719]: E0909 00:16:48.828954 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.829100 kubelet[2719]: E0909 00:16:48.829086 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.829100 kubelet[2719]: W0909 00:16:48.829095 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.829161 kubelet[2719]: E0909 00:16:48.829103 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.829244 kubelet[2719]: E0909 00:16:48.829231 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.829244 kubelet[2719]: W0909 00:16:48.829240 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.829304 kubelet[2719]: E0909 00:16:48.829247 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.829395 kubelet[2719]: E0909 00:16:48.829382 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.829395 kubelet[2719]: W0909 00:16:48.829391 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.829458 kubelet[2719]: E0909 00:16:48.829398 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.829547 kubelet[2719]: E0909 00:16:48.829533 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.829547 kubelet[2719]: W0909 00:16:48.829542 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.829609 kubelet[2719]: E0909 00:16:48.829549 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.829703 kubelet[2719]: E0909 00:16:48.829687 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.829703 kubelet[2719]: W0909 00:16:48.829698 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.829816 kubelet[2719]: E0909 00:16:48.829707 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.829931 kubelet[2719]: E0909 00:16:48.829915 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.829931 kubelet[2719]: W0909 00:16:48.829926 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.830012 kubelet[2719]: E0909 00:16:48.829934 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.830099 kubelet[2719]: E0909 00:16:48.830087 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.830099 kubelet[2719]: W0909 00:16:48.830096 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.830147 kubelet[2719]: E0909 00:16:48.830104 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.830252 kubelet[2719]: E0909 00:16:48.830240 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.830252 kubelet[2719]: W0909 00:16:48.830249 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.830295 kubelet[2719]: E0909 00:16:48.830257 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.830405 kubelet[2719]: E0909 00:16:48.830394 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.830405 kubelet[2719]: W0909 00:16:48.830402 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.830448 kubelet[2719]: E0909 00:16:48.830410 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.830556 kubelet[2719]: E0909 00:16:48.830544 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.830556 kubelet[2719]: W0909 00:16:48.830553 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.830614 kubelet[2719]: E0909 00:16:48.830561 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.842938 kubelet[2719]: E0909 00:16:48.842912 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.842938 kubelet[2719]: W0909 00:16:48.842926 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.842938 kubelet[2719]: E0909 00:16:48.842937 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.843073 kubelet[2719]: I0909 00:16:48.842962 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/08e66685-6e52-4173-9bbb-66e69f63a998-kubelet-dir\") pod \"csi-node-driver-6gwrd\" (UID: \"08e66685-6e52-4173-9bbb-66e69f63a998\") " pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:16:48.843175 kubelet[2719]: E0909 00:16:48.843153 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.843175 kubelet[2719]: W0909 00:16:48.843169 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.843252 kubelet[2719]: E0909 00:16:48.843189 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.843252 kubelet[2719]: I0909 00:16:48.843212 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08e66685-6e52-4173-9bbb-66e69f63a998-socket-dir\") pod \"csi-node-driver-6gwrd\" (UID: \"08e66685-6e52-4173-9bbb-66e69f63a998\") " pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:16:48.843453 kubelet[2719]: E0909 00:16:48.843427 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.843453 kubelet[2719]: W0909 00:16:48.843439 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.843453 kubelet[2719]: E0909 00:16:48.843451 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.843541 kubelet[2719]: I0909 00:16:48.843466 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/08e66685-6e52-4173-9bbb-66e69f63a998-varrun\") pod \"csi-node-driver-6gwrd\" (UID: \"08e66685-6e52-4173-9bbb-66e69f63a998\") " pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:16:48.843658 kubelet[2719]: E0909 00:16:48.843644 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.843658 kubelet[2719]: W0909 00:16:48.843654 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.843870 kubelet[2719]: E0909 00:16:48.843666 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.843870 kubelet[2719]: I0909 00:16:48.843688 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2hfp\" (UniqueName: \"kubernetes.io/projected/08e66685-6e52-4173-9bbb-66e69f63a998-kube-api-access-x2hfp\") pod \"csi-node-driver-6gwrd\" (UID: \"08e66685-6e52-4173-9bbb-66e69f63a998\") " pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:16:48.843870 kubelet[2719]: E0909 00:16:48.843850 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.843870 kubelet[2719]: W0909 00:16:48.843858 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.843997 kubelet[2719]: E0909 00:16:48.843870 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.843997 kubelet[2719]: I0909 00:16:48.843919 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08e66685-6e52-4173-9bbb-66e69f63a998-registration-dir\") pod \"csi-node-driver-6gwrd\" (UID: \"08e66685-6e52-4173-9bbb-66e69f63a998\") " pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:16:48.844120 kubelet[2719]: E0909 00:16:48.844103 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.844120 kubelet[2719]: W0909 00:16:48.844113 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.844187 kubelet[2719]: E0909 00:16:48.844125 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.844291 kubelet[2719]: E0909 00:16:48.844276 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.844291 kubelet[2719]: W0909 00:16:48.844284 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.844364 kubelet[2719]: E0909 00:16:48.844312 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.844452 kubelet[2719]: E0909 00:16:48.844436 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.844452 kubelet[2719]: W0909 00:16:48.844445 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.844529 kubelet[2719]: E0909 00:16:48.844470 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.844608 kubelet[2719]: E0909 00:16:48.844593 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.844608 kubelet[2719]: W0909 00:16:48.844601 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.844691 kubelet[2719]: E0909 00:16:48.844626 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.844797 kubelet[2719]: E0909 00:16:48.844782 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.844797 kubelet[2719]: W0909 00:16:48.844792 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.844873 kubelet[2719]: E0909 00:16:48.844813 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.844959 kubelet[2719]: E0909 00:16:48.844944 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.844959 kubelet[2719]: W0909 00:16:48.844953 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.845032 kubelet[2719]: E0909 00:16:48.844965 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.845115 kubelet[2719]: E0909 00:16:48.845101 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.845115 kubelet[2719]: W0909 00:16:48.845110 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.845172 kubelet[2719]: E0909 00:16:48.845117 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.845275 kubelet[2719]: E0909 00:16:48.845260 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.845275 kubelet[2719]: W0909 00:16:48.845269 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.845344 kubelet[2719]: E0909 00:16:48.845277 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.845434 kubelet[2719]: E0909 00:16:48.845419 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.845434 kubelet[2719]: W0909 00:16:48.845428 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.845491 kubelet[2719]: E0909 00:16:48.845435 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.845594 kubelet[2719]: E0909 00:16:48.845580 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.845594 kubelet[2719]: W0909 00:16:48.845588 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.845594 kubelet[2719]: E0909 00:16:48.845595 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.926260 containerd[1558]: time="2025-09-09T00:16:48.926202357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vpmmj,Uid:e936d231-b45d-4a81-a1e8-1f83e58b1dca,Namespace:calico-system,Attempt:0,}"
Sep 9 00:16:48.945346 kubelet[2719]: E0909 00:16:48.945305 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.945346 kubelet[2719]: W0909 00:16:48.945328 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.945346 kubelet[2719]: E0909 00:16:48.945350 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.946069 kubelet[2719]: E0909 00:16:48.945848 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.946069 kubelet[2719]: W0909 00:16:48.945859 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.946069 kubelet[2719]: E0909 00:16:48.945875 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.947470 kubelet[2719]: E0909 00:16:48.946320 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.947470 kubelet[2719]: W0909 00:16:48.946333 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.947470 kubelet[2719]: E0909 00:16:48.946347 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.947470 kubelet[2719]: E0909 00:16:48.946882 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.947470 kubelet[2719]: W0909 00:16:48.946892 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.947470 kubelet[2719]: E0909 00:16:48.947113 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.947833 kubelet[2719]: E0909 00:16:48.947800 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.947833 kubelet[2719]: W0909 00:16:48.947815 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.947833 kubelet[2719]: E0909 00:16:48.947829 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 9 00:16:48.948008 kubelet[2719]: E0909 00:16:48.947992 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 9 00:16:48.948008 kubelet[2719]: W0909 00:16:48.948004 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 9 00:16:48.948086 kubelet[2719]: E0909 00:16:48.948035 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.948166 kubelet[2719]: E0909 00:16:48.948149 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.948166 kubelet[2719]: W0909 00:16:48.948160 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.948236 kubelet[2719]: E0909 00:16:48.948183 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.948315 kubelet[2719]: E0909 00:16:48.948298 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.948315 kubelet[2719]: W0909 00:16:48.948309 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.948392 kubelet[2719]: E0909 00:16:48.948329 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.948452 kubelet[2719]: E0909 00:16:48.948436 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.948452 kubelet[2719]: W0909 00:16:48.948446 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.948526 kubelet[2719]: E0909 00:16:48.948459 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.948709 kubelet[2719]: E0909 00:16:48.948684 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.948785 kubelet[2719]: W0909 00:16:48.948710 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.948834 kubelet[2719]: E0909 00:16:48.948792 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.949072 kubelet[2719]: E0909 00:16:48.949046 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.949072 kubelet[2719]: W0909 00:16:48.949064 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.949248 kubelet[2719]: E0909 00:16:48.949222 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.949339 kubelet[2719]: E0909 00:16:48.949320 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.949339 kubelet[2719]: W0909 00:16:48.949334 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.949431 kubelet[2719]: E0909 00:16:48.949421 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.949618 kubelet[2719]: E0909 00:16:48.949587 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.949618 kubelet[2719]: W0909 00:16:48.949600 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.949716 kubelet[2719]: E0909 00:16:48.949657 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.949877 kubelet[2719]: E0909 00:16:48.949860 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.949877 kubelet[2719]: W0909 00:16:48.949873 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.949982 kubelet[2719]: E0909 00:16:48.949955 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.950134 kubelet[2719]: E0909 00:16:48.950116 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.950134 kubelet[2719]: W0909 00:16:48.950130 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.950271 kubelet[2719]: E0909 00:16:48.950252 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.950360 kubelet[2719]: E0909 00:16:48.950344 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.950360 kubelet[2719]: W0909 00:16:48.950355 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.950497 kubelet[2719]: E0909 00:16:48.950374 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.950795 kubelet[2719]: E0909 00:16:48.950705 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.950795 kubelet[2719]: W0909 00:16:48.950722 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.951133 kubelet[2719]: E0909 00:16:48.951098 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.951487 kubelet[2719]: E0909 00:16:48.951364 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.951487 kubelet[2719]: W0909 00:16:48.951377 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.951487 kubelet[2719]: E0909 00:16:48.951423 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.952835 kubelet[2719]: E0909 00:16:48.952814 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.952835 kubelet[2719]: W0909 00:16:48.952829 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.952946 kubelet[2719]: E0909 00:16:48.952924 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.953407 kubelet[2719]: E0909 00:16:48.953387 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.953407 kubelet[2719]: W0909 00:16:48.953404 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.953497 kubelet[2719]: E0909 00:16:48.953481 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.954456 kubelet[2719]: E0909 00:16:48.953641 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.954456 kubelet[2719]: W0909 00:16:48.953656 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.954456 kubelet[2719]: E0909 00:16:48.953695 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.954456 kubelet[2719]: E0909 00:16:48.953893 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.954456 kubelet[2719]: W0909 00:16:48.953904 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.954456 kubelet[2719]: E0909 00:16:48.953925 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.954456 kubelet[2719]: E0909 00:16:48.954247 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.954456 kubelet[2719]: W0909 00:16:48.954259 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.954456 kubelet[2719]: E0909 00:16:48.954277 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.955497 kubelet[2719]: E0909 00:16:48.954851 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.955497 kubelet[2719]: W0909 00:16:48.954865 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.955497 kubelet[2719]: E0909 00:16:48.954878 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:48.955497 kubelet[2719]: E0909 00:16:48.955333 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.955497 kubelet[2719]: W0909 00:16:48.955343 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.955497 kubelet[2719]: E0909 00:16:48.955354 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.959810 containerd[1558]: time="2025-09-09T00:16:48.959467489Z" level=info msg="connecting to shim 366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a" address="unix:///run/containerd/s/0a9455bb17719ad7e2335374d629fa3ba2031603b95589083176beaae8e521f9" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:16:48.965703 kubelet[2719]: E0909 00:16:48.965586 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:48.965703 kubelet[2719]: W0909 00:16:48.965613 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:48.965703 kubelet[2719]: E0909 00:16:48.965639 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:48.987968 systemd[1]: Started cri-containerd-366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a.scope - libcontainer container 366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a. 
Sep 9 00:16:49.021421 containerd[1558]: time="2025-09-09T00:16:49.021378067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vpmmj,Uid:e936d231-b45d-4a81-a1e8-1f83e58b1dca,Namespace:calico-system,Attempt:0,} returns sandbox id \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\"" Sep 9 00:16:50.333226 kubelet[2719]: E0909 00:16:50.333172 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998" Sep 9 00:16:50.350823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913697824.mount: Deactivated successfully. Sep 9 00:16:51.354327 containerd[1558]: time="2025-09-09T00:16:51.354249404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:51.355183 containerd[1558]: time="2025-09-09T00:16:51.355153628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:16:51.356281 containerd[1558]: time="2025-09-09T00:16:51.356249613Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:51.360906 containerd[1558]: time="2025-09-09T00:16:51.360855941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:51.361423 containerd[1558]: time="2025-09-09T00:16:51.361387272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.550790904s" Sep 9 00:16:51.361423 containerd[1558]: time="2025-09-09T00:16:51.361415415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:16:51.362559 containerd[1558]: time="2025-09-09T00:16:51.362527330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:16:51.372655 containerd[1558]: time="2025-09-09T00:16:51.372604715Z" level=info msg="CreateContainer within sandbox \"c6788ebaf6d2f863b1c6e8c3aa8de6b8584db0eeff0b9f27f3f465abcbe3643f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:16:51.388979 containerd[1558]: time="2025-09-09T00:16:51.388824242Z" level=info msg="Container f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:51.407152 containerd[1558]: time="2025-09-09T00:16:51.407103650Z" level=info msg="CreateContainer within sandbox \"c6788ebaf6d2f863b1c6e8c3aa8de6b8584db0eeff0b9f27f3f465abcbe3643f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969\"" Sep 9 00:16:51.407576 containerd[1558]: time="2025-09-09T00:16:51.407537046Z" level=info msg="StartContainer for \"f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969\"" Sep 9 00:16:51.408592 containerd[1558]: time="2025-09-09T00:16:51.408563350Z" level=info msg="connecting to shim f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969" address="unix:///run/containerd/s/1f52271a0aca0df775a3d08dcd77aa50f0bc1445d2cb1557d3a87e6e2c0a1e66" protocol=ttrpc version=3 Sep 9 
00:16:51.437895 systemd[1]: Started cri-containerd-f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969.scope - libcontainer container f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969. Sep 9 00:16:51.486121 containerd[1558]: time="2025-09-09T00:16:51.486075490Z" level=info msg="StartContainer for \"f68bfaaa56b4ae725fd3374d24d8514d23c9882f7c04ad3d00497addb748a969\" returns successfully" Sep 9 00:16:52.333898 kubelet[2719]: E0909 00:16:52.333830 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998" Sep 9 00:16:52.392209 kubelet[2719]: E0909 00:16:52.392161 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:52.401927 kubelet[2719]: I0909 00:16:52.401846 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5447b697b-x7z4q" podStartSLOduration=1.849885142 podStartE2EDuration="4.401827727s" podCreationTimestamp="2025-09-09 00:16:48 +0000 UTC" firstStartedPulling="2025-09-09 00:16:48.810162369 +0000 UTC m=+19.613146480" lastFinishedPulling="2025-09-09 00:16:51.362104954 +0000 UTC m=+22.165089065" observedRunningTime="2025-09-09 00:16:52.401430829 +0000 UTC m=+23.204414951" watchObservedRunningTime="2025-09-09 00:16:52.401827727 +0000 UTC m=+23.204811828" Sep 9 00:16:52.453858 kubelet[2719]: E0909 00:16:52.453810 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.453858 kubelet[2719]: W0909 00:16:52.453841 2719 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.453858 kubelet[2719]: E0909 00:16:52.453870 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.454126 kubelet[2719]: E0909 00:16:52.454117 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.454150 kubelet[2719]: W0909 00:16:52.454126 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.454150 kubelet[2719]: E0909 00:16:52.454135 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.454353 kubelet[2719]: E0909 00:16:52.454327 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.454353 kubelet[2719]: W0909 00:16:52.454338 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.454353 kubelet[2719]: E0909 00:16:52.454348 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.454683 kubelet[2719]: E0909 00:16:52.454648 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.454683 kubelet[2719]: W0909 00:16:52.454663 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.454683 kubelet[2719]: E0909 00:16:52.454673 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.454928 kubelet[2719]: E0909 00:16:52.454893 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.454928 kubelet[2719]: W0909 00:16:52.454903 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.454928 kubelet[2719]: E0909 00:16:52.454913 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.455098 kubelet[2719]: E0909 00:16:52.455081 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.455098 kubelet[2719]: W0909 00:16:52.455093 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.455145 kubelet[2719]: E0909 00:16:52.455103 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.455287 kubelet[2719]: E0909 00:16:52.455270 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.455287 kubelet[2719]: W0909 00:16:52.455282 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.455340 kubelet[2719]: E0909 00:16:52.455292 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.455498 kubelet[2719]: E0909 00:16:52.455481 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.455498 kubelet[2719]: W0909 00:16:52.455495 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.455539 kubelet[2719]: E0909 00:16:52.455507 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.455720 kubelet[2719]: E0909 00:16:52.455702 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.455720 kubelet[2719]: W0909 00:16:52.455715 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.455866 kubelet[2719]: E0909 00:16:52.455727 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.455949 kubelet[2719]: E0909 00:16:52.455932 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.455949 kubelet[2719]: W0909 00:16:52.455946 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.455995 kubelet[2719]: E0909 00:16:52.455956 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.456140 kubelet[2719]: E0909 00:16:52.456123 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.456140 kubelet[2719]: W0909 00:16:52.456137 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.456190 kubelet[2719]: E0909 00:16:52.456147 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.456333 kubelet[2719]: E0909 00:16:52.456317 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.456333 kubelet[2719]: W0909 00:16:52.456330 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.456381 kubelet[2719]: E0909 00:16:52.456340 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.456532 kubelet[2719]: E0909 00:16:52.456516 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.456532 kubelet[2719]: W0909 00:16:52.456530 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.456583 kubelet[2719]: E0909 00:16:52.456540 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.456822 kubelet[2719]: E0909 00:16:52.456805 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.456822 kubelet[2719]: W0909 00:16:52.456818 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.456872 kubelet[2719]: E0909 00:16:52.456830 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.457011 kubelet[2719]: E0909 00:16:52.456995 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.457011 kubelet[2719]: W0909 00:16:52.457007 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.457060 kubelet[2719]: E0909 00:16:52.457016 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.472582 kubelet[2719]: E0909 00:16:52.472547 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.472582 kubelet[2719]: W0909 00:16:52.472573 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.472793 kubelet[2719]: E0909 00:16:52.472607 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.472918 kubelet[2719]: E0909 00:16:52.472900 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.472918 kubelet[2719]: W0909 00:16:52.472915 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.472982 kubelet[2719]: E0909 00:16:52.472935 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.473281 kubelet[2719]: E0909 00:16:52.473252 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.473312 kubelet[2719]: W0909 00:16:52.473282 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.473338 kubelet[2719]: E0909 00:16:52.473321 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.473559 kubelet[2719]: E0909 00:16:52.473541 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.473610 kubelet[2719]: W0909 00:16:52.473558 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.473610 kubelet[2719]: E0909 00:16:52.473581 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.473853 kubelet[2719]: E0909 00:16:52.473839 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.473853 kubelet[2719]: W0909 00:16:52.473851 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.473914 kubelet[2719]: E0909 00:16:52.473869 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.474121 kubelet[2719]: E0909 00:16:52.474098 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.474121 kubelet[2719]: W0909 00:16:52.474109 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.474179 kubelet[2719]: E0909 00:16:52.474125 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.474295 kubelet[2719]: E0909 00:16:52.474282 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.474327 kubelet[2719]: W0909 00:16:52.474292 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.474327 kubelet[2719]: E0909 00:16:52.474312 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.474563 kubelet[2719]: E0909 00:16:52.474534 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.474563 kubelet[2719]: W0909 00:16:52.474554 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.474629 kubelet[2719]: E0909 00:16:52.474573 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.474847 kubelet[2719]: E0909 00:16:52.474831 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.474847 kubelet[2719]: W0909 00:16:52.474844 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.474898 kubelet[2719]: E0909 00:16:52.474880 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.475077 kubelet[2719]: E0909 00:16:52.475062 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.475077 kubelet[2719]: W0909 00:16:52.475073 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.475126 kubelet[2719]: E0909 00:16:52.475098 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.475287 kubelet[2719]: E0909 00:16:52.475271 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.475287 kubelet[2719]: W0909 00:16:52.475284 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.475335 kubelet[2719]: E0909 00:16:52.475300 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.475669 kubelet[2719]: E0909 00:16:52.475649 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.475702 kubelet[2719]: W0909 00:16:52.475665 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.475702 kubelet[2719]: E0909 00:16:52.475695 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.475944 kubelet[2719]: E0909 00:16:52.475927 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.475944 kubelet[2719]: W0909 00:16:52.475941 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.475992 kubelet[2719]: E0909 00:16:52.475956 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.476153 kubelet[2719]: E0909 00:16:52.476138 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.476153 kubelet[2719]: W0909 00:16:52.476150 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.476209 kubelet[2719]: E0909 00:16:52.476164 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.476366 kubelet[2719]: E0909 00:16:52.476351 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.476391 kubelet[2719]: W0909 00:16:52.476364 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.476391 kubelet[2719]: E0909 00:16:52.476380 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.476624 kubelet[2719]: E0909 00:16:52.476606 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.476624 kubelet[2719]: W0909 00:16:52.476622 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.476672 kubelet[2719]: E0909 00:16:52.476651 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.476940 kubelet[2719]: E0909 00:16:52.476922 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.476940 kubelet[2719]: W0909 00:16:52.476936 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.476991 kubelet[2719]: E0909 00:16:52.476948 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:16:52.477553 kubelet[2719]: E0909 00:16:52.477528 2719 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:16:52.477553 kubelet[2719]: W0909 00:16:52.477543 2719 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:16:52.477624 kubelet[2719]: E0909 00:16:52.477554 2719 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:16:52.864374 containerd[1558]: time="2025-09-09T00:16:52.864322981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:52.865805 containerd[1558]: time="2025-09-09T00:16:52.865766260Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:16:52.867039 containerd[1558]: time="2025-09-09T00:16:52.866986248Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:52.869045 containerd[1558]: time="2025-09-09T00:16:52.869010731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:52.869541 containerd[1558]: time="2025-09-09T00:16:52.869502156Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.506940342s" Sep 9 00:16:52.869541 containerd[1558]: time="2025-09-09T00:16:52.869546399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:16:52.872711 containerd[1558]: time="2025-09-09T00:16:52.872669502Z" level=info msg="CreateContainer within sandbox \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:16:52.886382 containerd[1558]: time="2025-09-09T00:16:52.886320843Z" level=info msg="Container 0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:52.898067 containerd[1558]: time="2025-09-09T00:16:52.898023274Z" level=info msg="CreateContainer within sandbox \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\"" Sep 9 00:16:52.898689 containerd[1558]: time="2025-09-09T00:16:52.898612543Z" level=info msg="StartContainer for \"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\"" Sep 9 00:16:52.900441 containerd[1558]: time="2025-09-09T00:16:52.900403667Z" level=info msg="connecting to shim 0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519" address="unix:///run/containerd/s/0a9455bb17719ad7e2335374d629fa3ba2031603b95589083176beaae8e521f9" protocol=ttrpc version=3 Sep 9 00:16:52.932870 systemd[1]: Started cri-containerd-0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519.scope - libcontainer container 0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519. Sep 9 00:16:52.992840 systemd[1]: cri-containerd-0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519.scope: Deactivated successfully. 
Sep 9 00:16:52.995607 containerd[1558]: time="2025-09-09T00:16:52.995544588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\" id:\"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\" pid:3423 exited_at:{seconds:1757377012 nanos:994994792}" Sep 9 00:16:53.220776 containerd[1558]: time="2025-09-09T00:16:53.220561810Z" level=info msg="received exit event container_id:\"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\" id:\"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\" pid:3423 exited_at:{seconds:1757377012 nanos:994994792}" Sep 9 00:16:53.222561 containerd[1558]: time="2025-09-09T00:16:53.222539033Z" level=info msg="StartContainer for \"0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519\" returns successfully" Sep 9 00:16:53.246685 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c4fd0bd17793f8bc64e46644c1bd2636841165f8a8e2160cc09c5064b738519-rootfs.mount: Deactivated successfully. 
Sep 9 00:16:53.396168 kubelet[2719]: I0909 00:16:53.396117 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:16:53.397302 kubelet[2719]: E0909 00:16:53.397277 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:16:54.333083 kubelet[2719]: E0909 00:16:54.333016 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998" Sep 9 00:16:54.400876 containerd[1558]: time="2025-09-09T00:16:54.400798042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:16:56.333588 kubelet[2719]: E0909 00:16:56.333529 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998" Sep 9 00:16:58.129820 containerd[1558]: time="2025-09-09T00:16:58.129754700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:58.130450 containerd[1558]: time="2025-09-09T00:16:58.130413850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:16:58.135272 containerd[1558]: time="2025-09-09T00:16:58.135212596Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:58.137354 containerd[1558]: 
time="2025-09-09T00:16:58.137316372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:16:58.138000 containerd[1558]: time="2025-09-09T00:16:58.137973619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.737133938s" Sep 9 00:16:58.138000 containerd[1558]: time="2025-09-09T00:16:58.138000249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:16:58.140364 containerd[1558]: time="2025-09-09T00:16:58.140321725Z" level=info msg="CreateContainer within sandbox \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:16:58.153273 containerd[1558]: time="2025-09-09T00:16:58.153235734Z" level=info msg="Container 0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:16:58.164618 containerd[1558]: time="2025-09-09T00:16:58.164579239Z" level=info msg="CreateContainer within sandbox \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\"" Sep 9 00:16:58.165088 containerd[1558]: time="2025-09-09T00:16:58.165043733Z" level=info msg="StartContainer for \"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\"" Sep 9 00:16:58.167027 containerd[1558]: time="2025-09-09T00:16:58.166996405Z" 
level=info msg="connecting to shim 0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958" address="unix:///run/containerd/s/0a9455bb17719ad7e2335374d629fa3ba2031603b95589083176beaae8e521f9" protocol=ttrpc version=3 Sep 9 00:16:58.189903 systemd[1]: Started cri-containerd-0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958.scope - libcontainer container 0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958. Sep 9 00:16:58.244565 containerd[1558]: time="2025-09-09T00:16:58.244515977Z" level=info msg="StartContainer for \"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\" returns successfully" Sep 9 00:16:58.333389 kubelet[2719]: E0909 00:16:58.333321 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998" Sep 9 00:16:59.529917 systemd[1]: cri-containerd-0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958.scope: Deactivated successfully. Sep 9 00:16:59.530488 systemd[1]: cri-containerd-0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958.scope: Consumed 649ms CPU time, 178.4M memory peak, 3.2M read from disk, 171.3M written to disk. 
Sep 9 00:16:59.530803 containerd[1558]: time="2025-09-09T00:16:59.530485460Z" level=info msg="received exit event container_id:\"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\" id:\"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\" pid:3482 exited_at:{seconds:1757377019 nanos:530215223}" Sep 9 00:16:59.530803 containerd[1558]: time="2025-09-09T00:16:59.530728458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\" id:\"0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958\" pid:3482 exited_at:{seconds:1757377019 nanos:530215223}" Sep 9 00:16:59.559094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0aff9a0cf1a476a729d76ccc59bc9e979931353873b1c7804d46afe6f1529958-rootfs.mount: Deactivated successfully. Sep 9 00:16:59.594248 kubelet[2719]: I0909 00:16:59.594222 2719 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:16:59.724151 systemd[1]: Created slice kubepods-burstable-pod28474118_60c7_451d_8c91_1c7c0d29a234.slice - libcontainer container kubepods-burstable-pod28474118_60c7_451d_8c91_1c7c0d29a234.slice. Sep 9 00:16:59.731152 systemd[1]: Created slice kubepods-besteffort-podeb7e7c20_7975_493d_9afe_960b50789f47.slice - libcontainer container kubepods-besteffort-podeb7e7c20_7975_493d_9afe_960b50789f47.slice. Sep 9 00:16:59.738608 systemd[1]: Created slice kubepods-burstable-pod4e904ceb_5b62_41f7_9cc4_30a41a8aca37.slice - libcontainer container kubepods-burstable-pod4e904ceb_5b62_41f7_9cc4_30a41a8aca37.slice. Sep 9 00:16:59.745356 systemd[1]: Created slice kubepods-besteffort-pode6cfb95f_994f_481e_9a63_7060331a5492.slice - libcontainer container kubepods-besteffort-pode6cfb95f_994f_481e_9a63_7060331a5492.slice. 
Sep 9 00:16:59.750716 systemd[1]: Created slice kubepods-besteffort-poda796ff90_d29f_4851_a7cb_f79a9c1eeee0.slice - libcontainer container kubepods-besteffort-poda796ff90_d29f_4851_a7cb_f79a9c1eeee0.slice. Sep 9 00:16:59.757573 systemd[1]: Created slice kubepods-besteffort-podb0f2d823_5bfa_460a_ba27_2dfc2d7f821e.slice - libcontainer container kubepods-besteffort-podb0f2d823_5bfa_460a_ba27_2dfc2d7f821e.slice. Sep 9 00:16:59.763350 systemd[1]: Created slice kubepods-besteffort-pod918ad19a_fd04_472b_9afa_e50f5529c7be.slice - libcontainer container kubepods-besteffort-pod918ad19a_fd04_472b_9afa_e50f5529c7be.slice. Sep 9 00:16:59.824366 kubelet[2719]: I0909 00:16:59.824207 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/28474118-60c7-451d-8c91-1c7c0d29a234-config-volume\") pod \"coredns-668d6bf9bc-srhmj\" (UID: \"28474118-60c7-451d-8c91-1c7c0d29a234\") " pod="kube-system/coredns-668d6bf9bc-srhmj" Sep 9 00:16:59.824366 kubelet[2719]: I0909 00:16:59.824256 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e904ceb-5b62-41f7-9cc4-30a41a8aca37-config-volume\") pod \"coredns-668d6bf9bc-4bqb6\" (UID: \"4e904ceb-5b62-41f7-9cc4-30a41a8aca37\") " pod="kube-system/coredns-668d6bf9bc-4bqb6" Sep 9 00:16:59.824366 kubelet[2719]: I0909 00:16:59.824273 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6cfb95f-994f-481e-9a63-7060331a5492-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-jnwpv\" (UID: \"e6cfb95f-994f-481e-9a63-7060331a5492\") " pod="calico-system/goldmane-54d579b49d-jnwpv" Sep 9 00:16:59.824366 kubelet[2719]: I0909 00:16:59.824302 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e6cfb95f-994f-481e-9a63-7060331a5492-goldmane-key-pair\") pod \"goldmane-54d579b49d-jnwpv\" (UID: \"e6cfb95f-994f-481e-9a63-7060331a5492\") " pod="calico-system/goldmane-54d579b49d-jnwpv" Sep 9 00:16:59.824366 kubelet[2719]: I0909 00:16:59.824321 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h4wg\" (UniqueName: \"kubernetes.io/projected/918ad19a-fd04-472b-9afa-e50f5529c7be-kube-api-access-7h4wg\") pod \"whisker-698c5d8dcf-cmdld\" (UID: \"918ad19a-fd04-472b-9afa-e50f5529c7be\") " pod="calico-system/whisker-698c5d8dcf-cmdld" Sep 9 00:16:59.824644 kubelet[2719]: I0909 00:16:59.824442 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6cfb95f-994f-481e-9a63-7060331a5492-config\") pod \"goldmane-54d579b49d-jnwpv\" (UID: \"e6cfb95f-994f-481e-9a63-7060331a5492\") " pod="calico-system/goldmane-54d579b49d-jnwpv" Sep 9 00:16:59.824644 kubelet[2719]: I0909 00:16:59.824488 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgtq7\" (UniqueName: \"kubernetes.io/projected/e6cfb95f-994f-481e-9a63-7060331a5492-kube-api-access-mgtq7\") pod \"goldmane-54d579b49d-jnwpv\" (UID: \"e6cfb95f-994f-481e-9a63-7060331a5492\") " pod="calico-system/goldmane-54d579b49d-jnwpv" Sep 9 00:16:59.824644 kubelet[2719]: I0909 00:16:59.824517 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0f2d823-5bfa-460a-ba27-2dfc2d7f821e-tigera-ca-bundle\") pod \"calico-kube-controllers-6468c59944-bskfc\" (UID: \"b0f2d823-5bfa-460a-ba27-2dfc2d7f821e\") " pod="calico-system/calico-kube-controllers-6468c59944-bskfc" Sep 9 00:16:59.824644 kubelet[2719]: I0909 00:16:59.824534 2719 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a796ff90-d29f-4851-a7cb-f79a9c1eeee0-calico-apiserver-certs\") pod \"calico-apiserver-67c4946788-5882s\" (UID: \"a796ff90-d29f-4851-a7cb-f79a9c1eeee0\") " pod="calico-apiserver/calico-apiserver-67c4946788-5882s" Sep 9 00:16:59.824644 kubelet[2719]: I0909 00:16:59.824550 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkl7v\" (UniqueName: \"kubernetes.io/projected/28474118-60c7-451d-8c91-1c7c0d29a234-kube-api-access-mkl7v\") pod \"coredns-668d6bf9bc-srhmj\" (UID: \"28474118-60c7-451d-8c91-1c7c0d29a234\") " pod="kube-system/coredns-668d6bf9bc-srhmj" Sep 9 00:16:59.824801 kubelet[2719]: I0909 00:16:59.824577 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-backend-key-pair\") pod \"whisker-698c5d8dcf-cmdld\" (UID: \"918ad19a-fd04-472b-9afa-e50f5529c7be\") " pod="calico-system/whisker-698c5d8dcf-cmdld" Sep 9 00:16:59.824801 kubelet[2719]: I0909 00:16:59.824595 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzts9\" (UniqueName: \"kubernetes.io/projected/eb7e7c20-7975-493d-9afe-960b50789f47-kube-api-access-xzts9\") pod \"calico-apiserver-67c4946788-2b724\" (UID: \"eb7e7c20-7975-493d-9afe-960b50789f47\") " pod="calico-apiserver/calico-apiserver-67c4946788-2b724" Sep 9 00:16:59.824801 kubelet[2719]: I0909 00:16:59.824615 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jk584\" (UniqueName: \"kubernetes.io/projected/b0f2d823-5bfa-460a-ba27-2dfc2d7f821e-kube-api-access-jk584\") pod \"calico-kube-controllers-6468c59944-bskfc\" (UID: \"b0f2d823-5bfa-460a-ba27-2dfc2d7f821e\") " 
pod="calico-system/calico-kube-controllers-6468c59944-bskfc"
Sep 9 00:16:59.824801 kubelet[2719]: I0909 00:16:59.824632 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmwt\" (UniqueName: \"kubernetes.io/projected/4e904ceb-5b62-41f7-9cc4-30a41a8aca37-kube-api-access-wzmwt\") pod \"coredns-668d6bf9bc-4bqb6\" (UID: \"4e904ceb-5b62-41f7-9cc4-30a41a8aca37\") " pod="kube-system/coredns-668d6bf9bc-4bqb6"
Sep 9 00:16:59.824801 kubelet[2719]: I0909 00:16:59.824647 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb7e7c20-7975-493d-9afe-960b50789f47-calico-apiserver-certs\") pod \"calico-apiserver-67c4946788-2b724\" (UID: \"eb7e7c20-7975-493d-9afe-960b50789f47\") " pod="calico-apiserver/calico-apiserver-67c4946788-2b724"
Sep 9 00:16:59.824944 kubelet[2719]: I0909 00:16:59.824688 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4f46\" (UniqueName: \"kubernetes.io/projected/a796ff90-d29f-4851-a7cb-f79a9c1eeee0-kube-api-access-k4f46\") pod \"calico-apiserver-67c4946788-5882s\" (UID: \"a796ff90-d29f-4851-a7cb-f79a9c1eeee0\") " pod="calico-apiserver/calico-apiserver-67c4946788-5882s"
Sep 9 00:16:59.824944 kubelet[2719]: I0909 00:16:59.824708 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-ca-bundle\") pod \"whisker-698c5d8dcf-cmdld\" (UID: \"918ad19a-fd04-472b-9afa-e50f5529c7be\") " pod="calico-system/whisker-698c5d8dcf-cmdld"
Sep 9 00:17:00.028616 kubelet[2719]: E0909 00:17:00.028582 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:17:00.029174 containerd[1558]: time="2025-09-09T00:17:00.029132404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srhmj,Uid:28474118-60c7-451d-8c91-1c7c0d29a234,Namespace:kube-system,Attempt:0,}"
Sep 9 00:17:00.036298 containerd[1558]: time="2025-09-09T00:17:00.036261988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-2b724,Uid:eb7e7c20-7975-493d-9afe-960b50789f47,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 00:17:00.043780 kubelet[2719]: E0909 00:17:00.043745 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:17:00.045439 containerd[1558]: time="2025-09-09T00:17:00.045059008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bqb6,Uid:4e904ceb-5b62-41f7-9cc4-30a41a8aca37,Namespace:kube-system,Attempt:0,}"
Sep 9 00:17:00.050076 containerd[1558]: time="2025-09-09T00:17:00.049574257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jnwpv,Uid:e6cfb95f-994f-481e-9a63-7060331a5492,Namespace:calico-system,Attempt:0,}"
Sep 9 00:17:00.056949 containerd[1558]: time="2025-09-09T00:17:00.055959883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-5882s,Uid:a796ff90-d29f-4851-a7cb-f79a9c1eeee0,Namespace:calico-apiserver,Attempt:0,}"
Sep 9 00:17:00.063569 containerd[1558]: time="2025-09-09T00:17:00.063533223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468c59944-bskfc,Uid:b0f2d823-5bfa-460a-ba27-2dfc2d7f821e,Namespace:calico-system,Attempt:0,}"
Sep 9 00:17:00.075961 containerd[1558]: time="2025-09-09T00:17:00.075683868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698c5d8dcf-cmdld,Uid:918ad19a-fd04-472b-9afa-e50f5529c7be,Namespace:calico-system,Attempt:0,}"
Sep 9 00:17:00.167327 containerd[1558]: time="2025-09-09T00:17:00.167149278Z" level=error msg="Failed to destroy network for sandbox \"f5921a4ec3246eee8ee864b8ad798c29d1d974c44f0a2468347a7caab5e3f5f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.187221 containerd[1558]: time="2025-09-09T00:17:00.172507743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srhmj,Uid:28474118-60c7-451d-8c91-1c7c0d29a234,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5921a4ec3246eee8ee864b8ad798c29d1d974c44f0a2468347a7caab5e3f5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.187947 containerd[1558]: time="2025-09-09T00:17:00.187907998Z" level=error msg="Failed to destroy network for sandbox \"075f5df9a58473080f559ae27e585b16eed7136da8660766db736a68640484d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.189403 containerd[1558]: time="2025-09-09T00:17:00.189364316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-2b724,Uid:eb7e7c20-7975-493d-9afe-960b50789f47,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"075f5df9a58473080f559ae27e585b16eed7136da8660766db736a68640484d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.189483 containerd[1558]: time="2025-09-09T00:17:00.183585322Z" level=error msg="Failed to destroy network for sandbox \"872db7025755d3b08dcc7b5846d70b496cc84009e7a49cde524cba3f94374a7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.207900 containerd[1558]: time="2025-09-09T00:17:00.183617512Z" level=error msg="Failed to destroy network for sandbox \"f1e6551884169df6de60010ea92d8ac8289a1751ee1777d3f651f66adb8da89b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.211711 containerd[1558]: time="2025-09-09T00:17:00.210395429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jnwpv,Uid:e6cfb95f-994f-481e-9a63-7060331a5492,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"872db7025755d3b08dcc7b5846d70b496cc84009e7a49cde524cba3f94374a7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.211711 containerd[1558]: time="2025-09-09T00:17:00.211504033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bqb6,Uid:4e904ceb-5b62-41f7-9cc4-30a41a8aca37,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e6551884169df6de60010ea92d8ac8289a1751ee1777d3f651f66adb8da89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.213614 containerd[1558]: time="2025-09-09T00:17:00.213573013Z" level=error msg="Failed to destroy network for sandbox \"869cc74a05595ac0fae40859738cf4c8422e943743ea89dd6ab13a4498fe6897\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.217994 containerd[1558]: time="2025-09-09T00:17:00.217936006Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468c59944-bskfc,Uid:b0f2d823-5bfa-460a-ba27-2dfc2d7f821e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc74a05595ac0fae40859738cf4c8422e943743ea89dd6ab13a4498fe6897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.220105 kubelet[2719]: E0909 00:17:00.220050 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872db7025755d3b08dcc7b5846d70b496cc84009e7a49cde524cba3f94374a7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.220105 kubelet[2719]: E0909 00:17:00.220096 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075f5df9a58473080f559ae27e585b16eed7136da8660766db736a68640484d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.220221 kubelet[2719]: E0909 00:17:00.220161 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872db7025755d3b08dcc7b5846d70b496cc84009e7a49cde524cba3f94374a7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jnwpv"
Sep 9 00:17:00.220221 kubelet[2719]: E0909 00:17:00.220177 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075f5df9a58473080f559ae27e585b16eed7136da8660766db736a68640484d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c4946788-2b724"
Sep 9 00:17:00.220221 kubelet[2719]: E0909 00:17:00.220190 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"872db7025755d3b08dcc7b5846d70b496cc84009e7a49cde524cba3f94374a7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jnwpv"
Sep 9 00:17:00.220301 kubelet[2719]: E0909 00:17:00.220203 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"075f5df9a58473080f559ae27e585b16eed7136da8660766db736a68640484d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c4946788-2b724"
Sep 9 00:17:00.220301 kubelet[2719]: E0909 00:17:00.220233 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-jnwpv_calico-system(e6cfb95f-994f-481e-9a63-7060331a5492)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-jnwpv_calico-system(e6cfb95f-994f-481e-9a63-7060331a5492)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"872db7025755d3b08dcc7b5846d70b496cc84009e7a49cde524cba3f94374a7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jnwpv" podUID="e6cfb95f-994f-481e-9a63-7060331a5492"
Sep 9 00:17:00.220301 kubelet[2719]: E0909 00:17:00.220276 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c4946788-2b724_calico-apiserver(eb7e7c20-7975-493d-9afe-960b50789f47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c4946788-2b724_calico-apiserver(eb7e7c20-7975-493d-9afe-960b50789f47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"075f5df9a58473080f559ae27e585b16eed7136da8660766db736a68640484d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c4946788-2b724" podUID="eb7e7c20-7975-493d-9afe-960b50789f47"
Sep 9 00:17:00.220415 kubelet[2719]: E0909 00:17:00.220050 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5921a4ec3246eee8ee864b8ad798c29d1d974c44f0a2468347a7caab5e3f5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.220415 kubelet[2719]: E0909 00:17:00.220352 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5921a4ec3246eee8ee864b8ad798c29d1d974c44f0a2468347a7caab5e3f5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-srhmj"
Sep 9 00:17:00.220415 kubelet[2719]: E0909 00:17:00.220374 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5921a4ec3246eee8ee864b8ad798c29d1d974c44f0a2468347a7caab5e3f5f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-srhmj"
Sep 9 00:17:00.220494 kubelet[2719]: E0909 00:17:00.220433 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-srhmj_kube-system(28474118-60c7-451d-8c91-1c7c0d29a234)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-srhmj_kube-system(28474118-60c7-451d-8c91-1c7c0d29a234)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5921a4ec3246eee8ee864b8ad798c29d1d974c44f0a2468347a7caab5e3f5f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-srhmj" podUID="28474118-60c7-451d-8c91-1c7c0d29a234"
Sep 9 00:17:00.220494 kubelet[2719]: E0909 00:17:00.220050 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc74a05595ac0fae40859738cf4c8422e943743ea89dd6ab13a4498fe6897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.220494 kubelet[2719]: E0909 00:17:00.220489 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc74a05595ac0fae40859738cf4c8422e943743ea89dd6ab13a4498fe6897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6468c59944-bskfc"
Sep 9 00:17:00.220584 kubelet[2719]: E0909 00:17:00.220502 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"869cc74a05595ac0fae40859738cf4c8422e943743ea89dd6ab13a4498fe6897\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6468c59944-bskfc"
Sep 9 00:17:00.220584 kubelet[2719]: E0909 00:17:00.220532 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6468c59944-bskfc_calico-system(b0f2d823-5bfa-460a-ba27-2dfc2d7f821e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6468c59944-bskfc_calico-system(b0f2d823-5bfa-460a-ba27-2dfc2d7f821e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"869cc74a05595ac0fae40859738cf4c8422e943743ea89dd6ab13a4498fe6897\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6468c59944-bskfc" podUID="b0f2d823-5bfa-460a-ba27-2dfc2d7f821e"
Sep 9 00:17:00.220584 kubelet[2719]: E0909 00:17:00.220096 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e6551884169df6de60010ea92d8ac8289a1751ee1777d3f651f66adb8da89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.220671 kubelet[2719]: E0909 00:17:00.220577 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e6551884169df6de60010ea92d8ac8289a1751ee1777d3f651f66adb8da89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4bqb6"
Sep 9 00:17:00.220671 kubelet[2719]: E0909 00:17:00.220601 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1e6551884169df6de60010ea92d8ac8289a1751ee1777d3f651f66adb8da89b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4bqb6"
Sep 9 00:17:00.220671 kubelet[2719]: E0909 00:17:00.220634 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4bqb6_kube-system(4e904ceb-5b62-41f7-9cc4-30a41a8aca37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4bqb6_kube-system(4e904ceb-5b62-41f7-9cc4-30a41a8aca37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1e6551884169df6de60010ea92d8ac8289a1751ee1777d3f651f66adb8da89b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4bqb6" podUID="4e904ceb-5b62-41f7-9cc4-30a41a8aca37"
Sep 9 00:17:00.241322 containerd[1558]: time="2025-09-09T00:17:00.241252285Z" level=error msg="Failed to destroy network for sandbox \"0faa82b99824ffc788b825f9ee596614d35f40f70b1b3017693fe64235274227\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.244136 containerd[1558]: time="2025-09-09T00:17:00.244034396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-698c5d8dcf-cmdld,Uid:918ad19a-fd04-472b-9afa-e50f5529c7be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa82b99824ffc788b825f9ee596614d35f40f70b1b3017693fe64235274227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.244286 kubelet[2719]: E0909 00:17:00.244247 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa82b99824ffc788b825f9ee596614d35f40f70b1b3017693fe64235274227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.244360 kubelet[2719]: E0909 00:17:00.244310 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa82b99824ffc788b825f9ee596614d35f40f70b1b3017693fe64235274227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698c5d8dcf-cmdld"
Sep 9 00:17:00.244360 kubelet[2719]: E0909 00:17:00.244336 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0faa82b99824ffc788b825f9ee596614d35f40f70b1b3017693fe64235274227\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-698c5d8dcf-cmdld"
Sep 9 00:17:00.244429 kubelet[2719]: E0909 00:17:00.244376 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-698c5d8dcf-cmdld_calico-system(918ad19a-fd04-472b-9afa-e50f5529c7be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-698c5d8dcf-cmdld_calico-system(918ad19a-fd04-472b-9afa-e50f5529c7be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0faa82b99824ffc788b825f9ee596614d35f40f70b1b3017693fe64235274227\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-698c5d8dcf-cmdld" podUID="918ad19a-fd04-472b-9afa-e50f5529c7be"
Sep 9 00:17:00.246694 containerd[1558]: time="2025-09-09T00:17:00.246658740Z" level=error msg="Failed to destroy network for sandbox \"9f159c78ea035d97e5cafbbe50f7281f145c985dfdda7ac821e314d4f91eaf66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.247889 containerd[1558]: time="2025-09-09T00:17:00.247848197Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-5882s,Uid:a796ff90-d29f-4851-a7cb-f79a9c1eeee0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f159c78ea035d97e5cafbbe50f7281f145c985dfdda7ac821e314d4f91eaf66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.248027 kubelet[2719]: E0909 00:17:00.247997 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f159c78ea035d97e5cafbbe50f7281f145c985dfdda7ac821e314d4f91eaf66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.248075 kubelet[2719]: E0909 00:17:00.248042 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f159c78ea035d97e5cafbbe50f7281f145c985dfdda7ac821e314d4f91eaf66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c4946788-5882s"
Sep 9 00:17:00.248075 kubelet[2719]: E0909 00:17:00.248059 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f159c78ea035d97e5cafbbe50f7281f145c985dfdda7ac821e314d4f91eaf66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67c4946788-5882s"
Sep 9 00:17:00.248121 kubelet[2719]: E0909 00:17:00.248094 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67c4946788-5882s_calico-apiserver(a796ff90-d29f-4851-a7cb-f79a9c1eeee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67c4946788-5882s_calico-apiserver(a796ff90-d29f-4851-a7cb-f79a9c1eeee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f159c78ea035d97e5cafbbe50f7281f145c985dfdda7ac821e314d4f91eaf66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67c4946788-5882s" podUID="a796ff90-d29f-4851-a7cb-f79a9c1eeee0"
Sep 9 00:17:00.339722 systemd[1]: Created slice kubepods-besteffort-pod08e66685_6e52_4173_9bbb_66e69f63a998.slice - libcontainer container kubepods-besteffort-pod08e66685_6e52_4173_9bbb_66e69f63a998.slice.
Sep 9 00:17:00.343468 containerd[1558]: time="2025-09-09T00:17:00.343402745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gwrd,Uid:08e66685-6e52-4173-9bbb-66e69f63a998,Namespace:calico-system,Attempt:0,}"
Sep 9 00:17:00.392494 containerd[1558]: time="2025-09-09T00:17:00.392424996Z" level=error msg="Failed to destroy network for sandbox \"32d8a126d752080fbfc29c1f265d73663dd6bf0ac08fd8c11881c6ffec6ad1c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.393797 containerd[1558]: time="2025-09-09T00:17:00.393756739Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gwrd,Uid:08e66685-6e52-4173-9bbb-66e69f63a998,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d8a126d752080fbfc29c1f265d73663dd6bf0ac08fd8c11881c6ffec6ad1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.394093 kubelet[2719]: E0909 00:17:00.394038 2719 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d8a126d752080fbfc29c1f265d73663dd6bf0ac08fd8c11881c6ffec6ad1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 9 00:17:00.394167 kubelet[2719]: E0909 00:17:00.394122 2719 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d8a126d752080fbfc29c1f265d73663dd6bf0ac08fd8c11881c6ffec6ad1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:17:00.394167 kubelet[2719]: E0909 00:17:00.394159 2719 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d8a126d752080fbfc29c1f265d73663dd6bf0ac08fd8c11881c6ffec6ad1c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6gwrd"
Sep 9 00:17:00.394250 kubelet[2719]: E0909 00:17:00.394222 2719 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6gwrd_calico-system(08e66685-6e52-4173-9bbb-66e69f63a998)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6gwrd_calico-system(08e66685-6e52-4173-9bbb-66e69f63a998)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32d8a126d752080fbfc29c1f265d73663dd6bf0ac08fd8c11881c6ffec6ad1c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6gwrd" podUID="08e66685-6e52-4173-9bbb-66e69f63a998"
Sep 9 00:17:00.417712 containerd[1558]: time="2025-09-09T00:17:00.417671203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 9 00:17:08.907456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4179078445.mount: Deactivated successfully.
Sep 9 00:17:10.225548 containerd[1558]: time="2025-09-09T00:17:10.225470737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:10.226516 containerd[1558]: time="2025-09-09T00:17:10.226449786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339"
Sep 9 00:17:10.227797 containerd[1558]: time="2025-09-09T00:17:10.227763493Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:10.229956 containerd[1558]: time="2025-09-09T00:17:10.229915985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:17:10.230432 containerd[1558]: time="2025-09-09T00:17:10.230406096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.812540037s"
Sep 9 00:17:10.230432 containerd[1558]: time="2025-09-09T00:17:10.230431814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\""
Sep 9 00:17:10.241052 containerd[1558]: time="2025-09-09T00:17:10.241001783Z" level=info msg="CreateContainer within sandbox \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Sep 9 00:17:10.259911 containerd[1558]: time="2025-09-09T00:17:10.259860828Z" level=info msg="Container bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73: CDI devices from CRI Config.CDIDevices: []"
Sep 9 00:17:10.282757 containerd[1558]: time="2025-09-09T00:17:10.282693745Z" level=info msg="CreateContainer within sandbox \"366b6f0f3e7aa8f4111bea2bb200114bf5c86530bc024c5132c61f95e825ca5a\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\""
Sep 9 00:17:10.283341 containerd[1558]: time="2025-09-09T00:17:10.283314632Z" level=info msg="StartContainer for \"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\""
Sep 9 00:17:10.300609 containerd[1558]: time="2025-09-09T00:17:10.300550939Z" level=info msg="connecting to shim bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73" address="unix:///run/containerd/s/0a9455bb17719ad7e2335374d629fa3ba2031603b95589083176beaae8e521f9" protocol=ttrpc version=3
Sep 9 00:17:10.326874 systemd[1]: Started cri-containerd-bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73.scope - libcontainer container bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73.
Sep 9 00:17:10.378022 containerd[1558]: time="2025-09-09T00:17:10.377971165Z" level=info msg="StartContainer for \"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\" returns successfully"
Sep 9 00:17:10.450982 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Sep 9 00:17:10.451682 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Sep 9 00:17:10.476508 kubelet[2719]: I0909 00:17:10.476126 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vpmmj" podStartSLOduration=1.26567805 podStartE2EDuration="22.474330927s" podCreationTimestamp="2025-09-09 00:16:48 +0000 UTC" firstStartedPulling="2025-09-09 00:16:49.022420933 +0000 UTC m=+19.825405044" lastFinishedPulling="2025-09-09 00:17:10.23107381 +0000 UTC m=+41.034057921" observedRunningTime="2025-09-09 00:17:10.472934936 +0000 UTC m=+41.275919057" watchObservedRunningTime="2025-09-09 00:17:10.474330927 +0000 UTC m=+41.277315039"
Sep 9 00:17:10.618168 kubelet[2719]: I0909 00:17:10.618097 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-ca-bundle\") pod \"918ad19a-fd04-472b-9afa-e50f5529c7be\" (UID: \"918ad19a-fd04-472b-9afa-e50f5529c7be\") "
Sep 9 00:17:10.618168 kubelet[2719]: I0909 00:17:10.618184 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-backend-key-pair\") pod \"918ad19a-fd04-472b-9afa-e50f5529c7be\" (UID: \"918ad19a-fd04-472b-9afa-e50f5529c7be\") "
Sep 9 00:17:10.622268 kubelet[2719]: I0909 00:17:10.622201 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "918ad19a-fd04-472b-9afa-e50f5529c7be" (UID: "918ad19a-fd04-472b-9afa-e50f5529c7be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 9 00:17:10.625931 kubelet[2719]: I0909 00:17:10.625896 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "918ad19a-fd04-472b-9afa-e50f5529c7be" (UID: "918ad19a-fd04-472b-9afa-e50f5529c7be"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 9 00:17:10.637564 containerd[1558]: time="2025-09-09T00:17:10.637519927Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\" id:\"e2d7ad15f2d63d34b9873e5209b16a48a23426ab3ff4a4056435a31a4580eae6\" pid:3844 exit_status:1 exited_at:{seconds:1757377030 nanos:637108765}"
Sep 9 00:17:10.718760 kubelet[2719]: I0909 00:17:10.718674 2719 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h4wg\" (UniqueName: \"kubernetes.io/projected/918ad19a-fd04-472b-9afa-e50f5529c7be-kube-api-access-7h4wg\") pod \"918ad19a-fd04-472b-9afa-e50f5529c7be\" (UID: \"918ad19a-fd04-472b-9afa-e50f5529c7be\") "
Sep 9 00:17:10.719105 kubelet[2719]: I0909 00:17:10.719054 2719 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Sep 9 00:17:10.719105 kubelet[2719]: I0909 00:17:10.719089 2719 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/918ad19a-fd04-472b-9afa-e50f5529c7be-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Sep 9 00:17:10.724657 kubelet[2719]: I0909 00:17:10.724587 2719 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/918ad19a-fd04-472b-9afa-e50f5529c7be-kube-api-access-7h4wg" (OuterVolumeSpecName: "kube-api-access-7h4wg") pod "918ad19a-fd04-472b-9afa-e50f5529c7be" (UID: "918ad19a-fd04-472b-9afa-e50f5529c7be"). InnerVolumeSpecName "kube-api-access-7h4wg". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 9 00:17:10.820407 kubelet[2719]: I0909 00:17:10.820246 2719 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7h4wg\" (UniqueName: \"kubernetes.io/projected/918ad19a-fd04-472b-9afa-e50f5529c7be-kube-api-access-7h4wg\") on node \"localhost\" DevicePath \"\""
Sep 9 00:17:11.237262 systemd[1]: var-lib-kubelet-pods-918ad19a\x2dfd04\x2d472b\x2d9afa\x2de50f5529c7be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7h4wg.mount: Deactivated successfully.
Sep 9 00:17:11.237391 systemd[1]: var-lib-kubelet-pods-918ad19a\x2dfd04\x2d472b\x2d9afa\x2de50f5529c7be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Sep 9 00:17:11.338579 containerd[1558]: time="2025-09-09T00:17:11.338198214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gwrd,Uid:08e66685-6e52-4173-9bbb-66e69f63a998,Namespace:calico-system,Attempt:0,}"
Sep 9 00:17:11.344120 systemd[1]: Removed slice kubepods-besteffort-pod918ad19a_fd04_472b_9afa_e50f5529c7be.slice - libcontainer container kubepods-besteffort-pod918ad19a_fd04_472b_9afa_e50f5529c7be.slice.
Sep 9 00:17:11.539222 systemd-networkd[1479]: calid0efacde22a: Link UP
Sep 9 00:17:11.540667 systemd-networkd[1479]: calid0efacde22a: Gained carrier
Sep 9 00:17:11.545322 systemd[1]: Created slice kubepods-besteffort-pod3d7b791d_2319_4219_997e_2285e4890801.slice - libcontainer container kubepods-besteffort-pod3d7b791d_2319_4219_997e_2285e4890801.slice.
Sep 9 00:17:11.585346 containerd[1558]: 2025-09-09 00:17:11.366 [INFO][3878] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:11.585346 containerd[1558]: 2025-09-09 00:17:11.386 [INFO][3878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6gwrd-eth0 csi-node-driver- calico-system 08e66685-6e52-4173-9bbb-66e69f63a998 707 0 2025-09-09 00:16:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6gwrd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid0efacde22a [] [] }} ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-" Sep 9 00:17:11.585346 containerd[1558]: 2025-09-09 00:17:11.387 [INFO][3878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.585346 containerd[1558]: 2025-09-09 00:17:11.449 [INFO][3893] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" HandleID="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Workload="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.451 [INFO][3893] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" HandleID="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Workload="localhost-k8s-csi--node--driver--6gwrd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000bee00), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6gwrd", "timestamp":"2025-09-09 00:17:11.449870644 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.451 [INFO][3893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.451 [INFO][3893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.451 [INFO][3893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.464 [INFO][3893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" host="localhost" Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.478 [INFO][3893] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.488 [INFO][3893] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.493 [INFO][3893] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.497 [INFO][3893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" 
Sep 9 00:17:11.585637 containerd[1558]: 2025-09-09 00:17:11.497 [INFO][3893] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" host="localhost" Sep 9 00:17:11.585957 containerd[1558]: 2025-09-09 00:17:11.499 [INFO][3893] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d Sep 9 00:17:11.585957 containerd[1558]: 2025-09-09 00:17:11.505 [INFO][3893] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" host="localhost" Sep 9 00:17:11.585957 containerd[1558]: 2025-09-09 00:17:11.515 [INFO][3893] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" host="localhost" Sep 9 00:17:11.585957 containerd[1558]: 2025-09-09 00:17:11.516 [INFO][3893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" host="localhost" Sep 9 00:17:11.585957 containerd[1558]: 2025-09-09 00:17:11.516 [INFO][3893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:17:11.585957 containerd[1558]: 2025-09-09 00:17:11.516 [INFO][3893] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" HandleID="k8s-pod-network.be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Workload="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.586136 containerd[1558]: 2025-09-09 00:17:11.520 [INFO][3878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6gwrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08e66685-6e52-4173-9bbb-66e69f63a998", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6gwrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"calid0efacde22a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:11.586198 containerd[1558]: 2025-09-09 00:17:11.520 [INFO][3878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.586198 containerd[1558]: 2025-09-09 00:17:11.520 [INFO][3878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0efacde22a ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.586198 containerd[1558]: 2025-09-09 00:17:11.545 [INFO][3878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.586276 containerd[1558]: 2025-09-09 00:17:11.559 [INFO][3878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6gwrd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"08e66685-6e52-4173-9bbb-66e69f63a998", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 48, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d", Pod:"csi-node-driver-6gwrd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid0efacde22a", MAC:"f2:59:bc:b9:ca:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:11.586344 containerd[1558]: 2025-09-09 00:17:11.579 [INFO][3878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" Namespace="calico-system" Pod="csi-node-driver-6gwrd" WorkloadEndpoint="localhost-k8s-csi--node--driver--6gwrd-eth0" Sep 9 00:17:11.587711 containerd[1558]: time="2025-09-09T00:17:11.587634397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\" id:\"f189426122af9d41af572182d6ace4a32f9d9a5ffd5a6a39a693b8c01a435945\" pid:3912 exit_status:1 exited_at:{seconds:1757377031 nanos:587213577}" Sep 9 00:17:11.627842 kubelet[2719]: I0909 00:17:11.627767 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d7b791d-2319-4219-997e-2285e4890801-whisker-ca-bundle\") pod \"whisker-7b98875998-zd8k2\" (UID: \"3d7b791d-2319-4219-997e-2285e4890801\") " pod="calico-system/whisker-7b98875998-zd8k2" Sep 9 00:17:11.627842 kubelet[2719]: I0909 00:17:11.627831 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3d7b791d-2319-4219-997e-2285e4890801-whisker-backend-key-pair\") pod \"whisker-7b98875998-zd8k2\" (UID: \"3d7b791d-2319-4219-997e-2285e4890801\") " pod="calico-system/whisker-7b98875998-zd8k2" Sep 9 00:17:11.628284 kubelet[2719]: I0909 00:17:11.627856 2719 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn245\" (UniqueName: \"kubernetes.io/projected/3d7b791d-2319-4219-997e-2285e4890801-kube-api-access-kn245\") pod \"whisker-7b98875998-zd8k2\" (UID: \"3d7b791d-2319-4219-997e-2285e4890801\") " pod="calico-system/whisker-7b98875998-zd8k2" Sep 9 00:17:11.660115 containerd[1558]: time="2025-09-09T00:17:11.659675901Z" level=info msg="connecting to shim be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d" address="unix:///run/containerd/s/95ef2ba0b0361566330a2badd3ca378015da076fed885a6a7483c81f5e13544f" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:11.686963 systemd[1]: Started cri-containerd-be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d.scope - libcontainer container be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d. 
Sep 9 00:17:11.701450 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:11.720615 containerd[1558]: time="2025-09-09T00:17:11.720552370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6gwrd,Uid:08e66685-6e52-4173-9bbb-66e69f63a998,Namespace:calico-system,Attempt:0,} returns sandbox id \"be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d\"" Sep 9 00:17:11.722432 containerd[1558]: time="2025-09-09T00:17:11.722391324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:17:11.851969 containerd[1558]: time="2025-09-09T00:17:11.851817860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b98875998-zd8k2,Uid:3d7b791d-2319-4219-997e-2285e4890801,Namespace:calico-system,Attempt:0,}" Sep 9 00:17:12.395933 systemd-networkd[1479]: calie758e9871c4: Link UP Sep 9 00:17:12.396173 systemd-networkd[1479]: calie758e9871c4: Gained carrier Sep 9 00:17:12.487779 containerd[1558]: 2025-09-09 00:17:11.879 [INFO][3980] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:12.487779 containerd[1558]: 2025-09-09 00:17:11.892 [INFO][3980] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b98875998--zd8k2-eth0 whisker-7b98875998- calico-system 3d7b791d-2319-4219-997e-2285e4890801 894 0 2025-09-09 00:17:11 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b98875998 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b98875998-zd8k2 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie758e9871c4 [] [] }} ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" 
WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-" Sep 9 00:17:12.487779 containerd[1558]: 2025-09-09 00:17:11.892 [INFO][3980] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.487779 containerd[1558]: 2025-09-09 00:17:11.921 [INFO][3994] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" HandleID="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Workload="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.921 [INFO][3994] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" HandleID="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Workload="localhost-k8s-whisker--7b98875998--zd8k2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b98875998-zd8k2", "timestamp":"2025-09-09 00:17:11.921654154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.921 [INFO][3994] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.921 [INFO][3994] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.921 [INFO][3994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.927 [INFO][3994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" host="localhost" Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.931 [INFO][3994] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.934 [INFO][3994] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.936 [INFO][3994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.938 [INFO][3994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:12.488423 containerd[1558]: 2025-09-09 00:17:11.938 [INFO][3994] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" host="localhost" Sep 9 00:17:12.488643 containerd[1558]: 2025-09-09 00:17:11.939 [INFO][3994] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a Sep 9 00:17:12.488643 containerd[1558]: 2025-09-09 00:17:12.244 [INFO][3994] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" host="localhost" Sep 9 00:17:12.488643 containerd[1558]: 2025-09-09 00:17:12.389 [INFO][3994] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" host="localhost" Sep 9 00:17:12.488643 containerd[1558]: 2025-09-09 00:17:12.389 [INFO][3994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" host="localhost" Sep 9 00:17:12.488643 containerd[1558]: 2025-09-09 00:17:12.389 [INFO][3994] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:17:12.488643 containerd[1558]: 2025-09-09 00:17:12.389 [INFO][3994] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" HandleID="k8s-pod-network.beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Workload="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.488787 containerd[1558]: 2025-09-09 00:17:12.392 [INFO][3980] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b98875998--zd8k2-eth0", GenerateName:"whisker-7b98875998-", Namespace:"calico-system", SelfLink:"", UID:"3d7b791d-2319-4219-997e-2285e4890801", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 17, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b98875998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b98875998-zd8k2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie758e9871c4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:12.488787 containerd[1558]: 2025-09-09 00:17:12.392 [INFO][3980] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.488862 containerd[1558]: 2025-09-09 00:17:12.393 [INFO][3980] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie758e9871c4 ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.488862 containerd[1558]: 2025-09-09 00:17:12.396 [INFO][3980] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.488906 containerd[1558]: 2025-09-09 00:17:12.396 [INFO][3980] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" 
WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b98875998--zd8k2-eth0", GenerateName:"whisker-7b98875998-", Namespace:"calico-system", SelfLink:"", UID:"3d7b791d-2319-4219-997e-2285e4890801", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 17, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b98875998", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a", Pod:"whisker-7b98875998-zd8k2", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie758e9871c4", MAC:"c6:ec:41:3a:4f:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:12.488962 containerd[1558]: 2025-09-09 00:17:12.483 [INFO][3980] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" Namespace="calico-system" Pod="whisker-7b98875998-zd8k2" WorkloadEndpoint="localhost-k8s-whisker--7b98875998--zd8k2-eth0" Sep 9 00:17:12.870300 containerd[1558]: time="2025-09-09T00:17:12.870241440Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67c4946788-5882s,Uid:a796ff90-d29f-4851-a7cb-f79a9c1eeee0,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:17:13.224913 systemd-networkd[1479]: calid0efacde22a: Gained IPv6LL Sep 9 00:17:13.333909 kubelet[2719]: E0909 00:17:13.333864 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:13.334919 containerd[1558]: time="2025-09-09T00:17:13.334858407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468c59944-bskfc,Uid:b0f2d823-5bfa-460a-ba27-2dfc2d7f821e,Namespace:calico-system,Attempt:0,}" Sep 9 00:17:13.335264 containerd[1558]: time="2025-09-09T00:17:13.335050258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4bqb6,Uid:4e904ceb-5b62-41f7-9cc4-30a41a8aca37,Namespace:kube-system,Attempt:0,}" Sep 9 00:17:13.336562 kubelet[2719]: I0909 00:17:13.336502 2719 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="918ad19a-fd04-472b-9afa-e50f5529c7be" path="/var/lib/kubelet/pods/918ad19a-fd04-472b-9afa-e50f5529c7be/volumes" Sep 9 00:17:14.120954 systemd-networkd[1479]: calie758e9871c4: Gained IPv6LL Sep 9 00:17:14.333849 containerd[1558]: time="2025-09-09T00:17:14.333717049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jnwpv,Uid:e6cfb95f-994f-481e-9a63-7060331a5492,Namespace:calico-system,Attempt:0,}" Sep 9 00:17:14.334361 containerd[1558]: time="2025-09-09T00:17:14.334140163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-2b724,Uid:eb7e7c20-7975-493d-9afe-960b50789f47,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:17:14.884838 containerd[1558]: time="2025-09-09T00:17:14.884777168Z" level=info msg="connecting to shim beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a" 
address="unix:///run/containerd/s/00751fefa53b0519643b1dd3a38c0a25a1b824317cdb9e6207ce6d88712810c2" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:14.924930 systemd[1]: Started cri-containerd-beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a.scope - libcontainer container beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a. Sep 9 00:17:14.944199 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:14.948209 systemd-networkd[1479]: cali05cc8fe7f07: Link UP Sep 9 00:17:14.950026 systemd-networkd[1479]: cali05cc8fe7f07: Gained carrier Sep 9 00:17:14.970920 containerd[1558]: 2025-09-09 00:17:14.730 [INFO][4132] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:14.970920 containerd[1558]: 2025-09-09 00:17:14.741 [INFO][4132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0 calico-kube-controllers-6468c59944- calico-system b0f2d823-5bfa-460a-ba27-2dfc2d7f821e 817 0 2025-09-09 00:16:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6468c59944 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6468c59944-bskfc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali05cc8fe7f07 [] [] }} ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-" Sep 9 00:17:14.970920 containerd[1558]: 2025-09-09 00:17:14.741 [INFO][4132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:14.970920 containerd[1558]: 2025-09-09 00:17:14.862 [INFO][4147] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" HandleID="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Workload="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.863 [INFO][4147] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" HandleID="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Workload="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041ead0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6468c59944-bskfc", "timestamp":"2025-09-09 00:17:14.862972184 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.863 [INFO][4147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.863 [INFO][4147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.863 [INFO][4147] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.884 [INFO][4147] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" host="localhost" Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.891 [INFO][4147] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.896 [INFO][4147] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.900 [INFO][4147] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.903 [INFO][4147] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:14.971236 containerd[1558]: 2025-09-09 00:17:14.903 [INFO][4147] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" host="localhost" Sep 9 00:17:14.971577 containerd[1558]: 2025-09-09 00:17:14.907 [INFO][4147] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595 Sep 9 00:17:14.971577 containerd[1558]: 2025-09-09 00:17:14.913 [INFO][4147] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" host="localhost" Sep 9 00:17:14.971577 containerd[1558]: 2025-09-09 00:17:14.925 [INFO][4147] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" host="localhost" Sep 9 00:17:14.971577 containerd[1558]: 2025-09-09 00:17:14.925 [INFO][4147] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" host="localhost" Sep 9 00:17:14.971577 containerd[1558]: 2025-09-09 00:17:14.926 [INFO][4147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:17:14.971577 containerd[1558]: 2025-09-09 00:17:14.926 [INFO][4147] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" HandleID="k8s-pod-network.7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Workload="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:14.971718 containerd[1558]: 2025-09-09 00:17:14.940 [INFO][4132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0", GenerateName:"calico-kube-controllers-6468c59944-", Namespace:"calico-system", SelfLink:"", UID:"b0f2d823-5bfa-460a-ba27-2dfc2d7f821e", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6468c59944", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6468c59944-bskfc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05cc8fe7f07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:14.971807 containerd[1558]: 2025-09-09 00:17:14.941 [INFO][4132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:14.971807 containerd[1558]: 2025-09-09 00:17:14.941 [INFO][4132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05cc8fe7f07 ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:14.971807 containerd[1558]: 2025-09-09 00:17:14.951 [INFO][4132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:14.971890 containerd[1558]: 2025-09-09 
00:17:14.954 [INFO][4132] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0", GenerateName:"calico-kube-controllers-6468c59944-", Namespace:"calico-system", SelfLink:"", UID:"b0f2d823-5bfa-460a-ba27-2dfc2d7f821e", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6468c59944", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595", Pod:"calico-kube-controllers-6468c59944-bskfc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05cc8fe7f07", MAC:"5a:57:3c:7c:4c:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:14.971950 containerd[1558]: 2025-09-09 
00:17:14.967 [INFO][4132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" Namespace="calico-system" Pod="calico-kube-controllers-6468c59944-bskfc" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6468c59944--bskfc-eth0" Sep 9 00:17:15.012663 containerd[1558]: time="2025-09-09T00:17:15.012590793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b98875998-zd8k2,Uid:3d7b791d-2319-4219-997e-2285e4890801,Namespace:calico-system,Attempt:0,} returns sandbox id \"beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a\"" Sep 9 00:17:15.019622 containerd[1558]: time="2025-09-09T00:17:15.019545067Z" level=info msg="connecting to shim 7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595" address="unix:///run/containerd/s/ab5eae72c39428380a8cbb0ba8a5db6cbc013fc946c8a4a7ea18d8100ea0ae8a" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:15.044576 systemd-networkd[1479]: calice1f184b04b: Link UP Sep 9 00:17:15.045209 systemd-networkd[1479]: calice1f184b04b: Gained carrier Sep 9 00:17:15.064076 containerd[1558]: 2025-09-09 00:17:14.853 [INFO][4149] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:15.064076 containerd[1558]: 2025-09-09 00:17:14.870 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67c4946788--5882s-eth0 calico-apiserver-67c4946788- calico-apiserver a796ff90-d29f-4851-a7cb-f79a9c1eeee0 816 0 2025-09-09 00:16:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c4946788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67c4946788-5882s eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calice1f184b04b [] [] }} ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-" Sep 9 00:17:15.064076 containerd[1558]: 2025-09-09 00:17:14.870 [INFO][4149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.064076 containerd[1558]: 2025-09-09 00:17:14.973 [INFO][4233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" HandleID="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Workload="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:14.973 [INFO][4233] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" HandleID="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Workload="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e380), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67c4946788-5882s", "timestamp":"2025-09-09 00:17:14.973570431 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:14.974 [INFO][4233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:14.974 [INFO][4233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:14.974 [INFO][4233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:14.980 [INFO][4233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" host="localhost" Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:14.992 [INFO][4233] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:15.001 [INFO][4233] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:15.006 [INFO][4233] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:15.010 [INFO][4233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:15.064359 containerd[1558]: 2025-09-09 00:17:15.010 [INFO][4233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" host="localhost" Sep 9 00:17:15.064666 containerd[1558]: 2025-09-09 00:17:15.013 [INFO][4233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19 Sep 9 00:17:15.064666 containerd[1558]: 2025-09-09 00:17:15.023 [INFO][4233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" host="localhost" Sep 9 00:17:15.064666 containerd[1558]: 2025-09-09 00:17:15.031 [INFO][4233] ipam/ipam.go 
1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" host="localhost" Sep 9 00:17:15.064666 containerd[1558]: 2025-09-09 00:17:15.031 [INFO][4233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" host="localhost" Sep 9 00:17:15.064666 containerd[1558]: 2025-09-09 00:17:15.032 [INFO][4233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:17:15.064666 containerd[1558]: 2025-09-09 00:17:15.032 [INFO][4233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" HandleID="k8s-pod-network.758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Workload="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.064869 containerd[1558]: 2025-09-09 00:17:15.039 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c4946788--5882s-eth0", GenerateName:"calico-apiserver-67c4946788-", Namespace:"calico-apiserver", SelfLink:"", UID:"a796ff90-d29f-4851-a7cb-f79a9c1eeee0", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4946788", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67c4946788-5882s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice1f184b04b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:15.064940 containerd[1558]: 2025-09-09 00:17:15.039 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.064940 containerd[1558]: 2025-09-09 00:17:15.039 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice1f184b04b ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.064940 containerd[1558]: 2025-09-09 00:17:15.046 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.065068 containerd[1558]: 2025-09-09 
00:17:15.046 [INFO][4149] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c4946788--5882s-eth0", GenerateName:"calico-apiserver-67c4946788-", Namespace:"calico-apiserver", SelfLink:"", UID:"a796ff90-d29f-4851-a7cb-f79a9c1eeee0", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4946788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19", Pod:"calico-apiserver-67c4946788-5882s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calice1f184b04b", MAC:"2e:33:aa:35:50:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:15.065142 containerd[1558]: 2025-09-09 00:17:15.060 [INFO][4149] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-5882s" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--5882s-eth0" Sep 9 00:17:15.067989 systemd[1]: Started cri-containerd-7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595.scope - libcontainer container 7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595. Sep 9 00:17:15.083442 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:15.098862 containerd[1558]: time="2025-09-09T00:17:15.098798711Z" level=info msg="connecting to shim 758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19" address="unix:///run/containerd/s/77ddc23506a410d3ed4078f88aa8015bb8643ef47d6a72509f3333be73427f13" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:15.131930 systemd[1]: Started cri-containerd-758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19.scope - libcontainer container 758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19. 
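Both CNI ADD flows above follow the same IPAM sequence: try the host's affine block `192.168.88.128/26`, load it, assign the first free address from it ("Attempting to assign 1 addresses from block"), then write the block back to claim the IP ("Writing block in order to claim IPs"). A simplified model of that block-scan step, assuming addresses below .131 were already claimed (the prior-claims set is hypothetical; Calico's real block state lives in the datastore):

```python
import ipaddress

def assign_from_block(cidr: str, allocated: set[str]) -> str:
    """Return the first free address in the block, mimicking ipam.go's
    'attempt to assign from block' step (simplified)."""
    block = ipaddress.ip_network(cidr)
    for ip in block.hosts():   # hosts() skips network/broadcast; an assumption
        s = str(ip)
        if s not in allocated:
            allocated.add(s)   # corresponds to "Writing block in order to claim IPs"
            return s
    raise RuntimeError(f"block {cidr} exhausted")

allocated = {"192.168.88.129", "192.168.88.130"}  # hypothetical prior claims
print(assign_from_block("192.168.88.128/26", allocated))  # 192.168.88.131
print(assign_from_block("192.168.88.128/26", allocated))  # 192.168.88.132
print(assign_from_block("192.168.88.128/26", allocated))  # 192.168.88.133
```

The three successive results match the addresses handed to calico-kube-controllers (.131), the first calico-apiserver pod (.132), and coredns (.133) in this journal.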
Sep 9 00:17:15.139899 containerd[1558]: time="2025-09-09T00:17:15.139231321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6468c59944-bskfc,Uid:b0f2d823-5bfa-460a-ba27-2dfc2d7f821e,Namespace:calico-system,Attempt:0,} returns sandbox id \"7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595\"" Sep 9 00:17:15.154534 systemd-networkd[1479]: cali31e7ba3ffbc: Link UP Sep 9 00:17:15.155251 systemd-networkd[1479]: cali31e7ba3ffbc: Gained carrier Sep 9 00:17:15.161779 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:15.306766 containerd[1558]: time="2025-09-09T00:17:15.306420536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-5882s,Uid:a796ff90-d29f-4851-a7cb-f79a9c1eeee0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19\"" Sep 9 00:17:15.330469 containerd[1558]: 2025-09-09 00:17:14.876 [INFO][4164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:15.330469 containerd[1558]: 2025-09-09 00:17:14.892 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0 coredns-668d6bf9bc- kube-system 4e904ceb-5b62-41f7-9cc4-30a41a8aca37 814 0 2025-09-09 00:16:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4bqb6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali31e7ba3ffbc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-" Sep 9 00:17:15.330469 containerd[1558]: 2025-09-09 00:17:14.892 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.330469 containerd[1558]: 2025-09-09 00:17:14.989 [INFO][4251] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" HandleID="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Workload="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:14.989 [INFO][4251] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" HandleID="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Workload="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b9a70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4bqb6", "timestamp":"2025-09-09 00:17:14.989382375 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:14.989 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.032 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.033 [INFO][4251] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.081 [INFO][4251] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" host="localhost" Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.093 [INFO][4251] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.104 [INFO][4251] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.106 [INFO][4251] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.116 [INFO][4251] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:15.330861 containerd[1558]: 2025-09-09 00:17:15.116 [INFO][4251] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" host="localhost" Sep 9 00:17:15.331134 containerd[1558]: 2025-09-09 00:17:15.120 [INFO][4251] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6 Sep 9 00:17:15.331134 containerd[1558]: 2025-09-09 00:17:15.126 [INFO][4251] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" host="localhost" Sep 9 00:17:15.331134 containerd[1558]: 2025-09-09 00:17:15.136 [INFO][4251] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" host="localhost" Sep 9 00:17:15.331134 containerd[1558]: 2025-09-09 00:17:15.136 [INFO][4251] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" host="localhost" Sep 9 00:17:15.331134 containerd[1558]: 2025-09-09 00:17:15.136 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:17:15.331134 containerd[1558]: 2025-09-09 00:17:15.136 [INFO][4251] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" HandleID="k8s-pod-network.392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Workload="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.331267 containerd[1558]: 2025-09-09 00:17:15.151 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e904ceb-5b62-41f7-9cc4-30a41a8aca37", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4bqb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31e7ba3ffbc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:15.331343 containerd[1558]: 2025-09-09 00:17:15.152 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.331343 containerd[1558]: 2025-09-09 00:17:15.152 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31e7ba3ffbc ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.331343 containerd[1558]: 2025-09-09 00:17:15.154 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.331410 containerd[1558]: 2025-09-09 00:17:15.157 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"4e904ceb-5b62-41f7-9cc4-30a41a8aca37", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6", Pod:"coredns-668d6bf9bc-4bqb6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali31e7ba3ffbc", MAC:"62:c4:36:0a:01:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:15.331410 containerd[1558]: 2025-09-09 00:17:15.313 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" Namespace="kube-system" Pod="coredns-668d6bf9bc-4bqb6" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4bqb6-eth0" Sep 9 00:17:15.335264 kubelet[2719]: E0909 00:17:15.335144 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:15.337416 containerd[1558]: time="2025-09-09T00:17:15.337388258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srhmj,Uid:28474118-60c7-451d-8c91-1c7c0d29a234,Namespace:kube-system,Attempt:0,}" Sep 9 00:17:15.374484 systemd-networkd[1479]: caliee145afa0d3: Link UP Sep 9 00:17:15.376285 systemd-networkd[1479]: caliee145afa0d3: Gained carrier Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:14.899 [INFO][4189] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:14.923 [INFO][4189] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67c4946788--2b724-eth0 calico-apiserver-67c4946788- calico-apiserver eb7e7c20-7975-493d-9afe-960b50789f47 815 0 2025-09-09 00:16:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67c4946788 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
localhost calico-apiserver-67c4946788-2b724 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee145afa0d3 [] [] }} ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:14.923 [INFO][4189] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.013 [INFO][4271] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" HandleID="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Workload="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.013 [INFO][4271] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" HandleID="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Workload="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345b00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67c4946788-2b724", "timestamp":"2025-09-09 00:17:15.01321788 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:15.401439 containerd[1558]: 
2025-09-09 00:17:15.013 [INFO][4271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.136 [INFO][4271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.136 [INFO][4271] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.181 [INFO][4271] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.319 [INFO][4271] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.324 [INFO][4271] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.328 [INFO][4271] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.332 [INFO][4271] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.332 [INFO][4271] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.334 [INFO][4271] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07 Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.347 [INFO][4271] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" host="localhost" 
Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.357 [INFO][4271] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.357 [INFO][4271] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" host="localhost" Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.357 [INFO][4271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:17:15.401439 containerd[1558]: 2025-09-09 00:17:15.357 [INFO][4271] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" HandleID="k8s-pod-network.b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Workload="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.404530 containerd[1558]: 2025-09-09 00:17:15.361 [INFO][4189] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c4946788--2b724-eth0", GenerateName:"calico-apiserver-67c4946788-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb7e7c20-7975-493d-9afe-960b50789f47", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4946788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67c4946788-2b724", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee145afa0d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:15.404530 containerd[1558]: 2025-09-09 00:17:15.361 [INFO][4189] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.404530 containerd[1558]: 2025-09-09 00:17:15.361 [INFO][4189] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee145afa0d3 ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.404530 containerd[1558]: 2025-09-09 00:17:15.377 [INFO][4189] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.404530 containerd[1558]: 2025-09-09 00:17:15.378 [INFO][4189] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67c4946788--2b724-eth0", GenerateName:"calico-apiserver-67c4946788-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb7e7c20-7975-493d-9afe-960b50789f47", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67c4946788", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07", Pod:"calico-apiserver-67c4946788-2b724", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee145afa0d3", MAC:"16:09:c3:ba:72:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:15.404530 containerd[1558]: 2025-09-09 00:17:15.394 [INFO][4189] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" Namespace="calico-apiserver" Pod="calico-apiserver-67c4946788-2b724" WorkloadEndpoint="localhost-k8s-calico--apiserver--67c4946788--2b724-eth0" Sep 9 00:17:15.405776 containerd[1558]: time="2025-09-09T00:17:15.405507249Z" level=info msg="connecting to shim 392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6" address="unix:///run/containerd/s/3c0981eb3bbb1a634e175a646332a3a07044ad95026786ddbf56d34e5bb499e4" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:15.449930 systemd[1]: Started cri-containerd-392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6.scope - libcontainer container 392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6. Sep 9 00:17:15.485345 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:15.487275 containerd[1558]: time="2025-09-09T00:17:15.487212666Z" level=info msg="connecting to shim b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07" address="unix:///run/containerd/s/f086b5707237174a6505dc3dc170d5fd45d3eb38b700548f662a4edd2b2290d6" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:15.573045 systemd[1]: Started cri-containerd-b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07.scope - libcontainer container b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07. 
Sep 9 00:17:15.597237 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:16.011499 systemd-networkd[1479]: calif3b9c615cdc: Link UP Sep 9 00:17:16.013066 systemd-networkd[1479]: calif3b9c615cdc: Gained carrier Sep 9 00:17:16.038297 kubelet[2719]: I0909 00:17:16.038250 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:17:16.038684 kubelet[2719]: E0909 00:17:16.038659 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:14.910 [INFO][4182] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:14.931 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--jnwpv-eth0 goldmane-54d579b49d- calico-system e6cfb95f-994f-481e-9a63-7060331a5492 819 0 2025-09-09 00:16:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-jnwpv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif3b9c615cdc [] [] }} ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:14.931 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.018 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" HandleID="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Workload="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.019 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" HandleID="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Workload="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036eb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-jnwpv", "timestamp":"2025-09-09 00:17:15.018865962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.019 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.357 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.357 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.366 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.443 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.468 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.472 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.697 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.697 [INFO][4277] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.750 [INFO][4277] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:15.760 [INFO][4277] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:16.000 [INFO][4277] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:16.001 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" host="localhost" Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:16.001 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:17:16.323060 containerd[1558]: 2025-09-09 00:17:16.001 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" HandleID="k8s-pod-network.f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Workload="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.323860 containerd[1558]: 2025-09-09 00:17:16.007 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--jnwpv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e6cfb95f-994f-481e-9a63-7060331a5492", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-jnwpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif3b9c615cdc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:16.323860 containerd[1558]: 2025-09-09 00:17:16.008 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.323860 containerd[1558]: 2025-09-09 00:17:16.008 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3b9c615cdc ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.323860 containerd[1558]: 2025-09-09 00:17:16.014 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.323860 containerd[1558]: 2025-09-09 00:17:16.014 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--jnwpv-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e6cfb95f-994f-481e-9a63-7060331a5492", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a", Pod:"goldmane-54d579b49d-jnwpv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif3b9c615cdc", MAC:"da:50:85:54:41:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:16.323860 containerd[1558]: 2025-09-09 00:17:16.320 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" Namespace="calico-system" Pod="goldmane-54d579b49d-jnwpv" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jnwpv-eth0" Sep 9 00:17:16.357488 containerd[1558]: time="2025-09-09T00:17:16.357449861Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-4bqb6,Uid:4e904ceb-5b62-41f7-9cc4-30a41a8aca37,Namespace:kube-system,Attempt:0,} returns sandbox id \"392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6\"" Sep 9 00:17:16.358406 kubelet[2719]: E0909 00:17:16.358379 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:16.480602 containerd[1558]: time="2025-09-09T00:17:16.480561110Z" level=info msg="CreateContainer within sandbox \"392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:17:16.482296 kubelet[2719]: E0909 00:17:16.482271 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:16.488933 systemd-networkd[1479]: calice1f184b04b: Gained IPv6LL Sep 9 00:17:16.489293 systemd-networkd[1479]: cali31e7ba3ffbc: Gained IPv6LL Sep 9 00:17:16.545763 containerd[1558]: time="2025-09-09T00:17:16.545100875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67c4946788-2b724,Uid:eb7e7c20-7975-493d-9afe-960b50789f47,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07\"" Sep 9 00:17:16.568625 containerd[1558]: time="2025-09-09T00:17:16.568561212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:16.571014 containerd[1558]: time="2025-09-09T00:17:16.570955316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:17:16.597472 systemd-networkd[1479]: cali5ccddfdbbf1: Link UP Sep 9 00:17:16.600552 systemd-networkd[1479]: cali5ccddfdbbf1: Gained carrier Sep 9 
00:17:16.616904 systemd-networkd[1479]: cali05cc8fe7f07: Gained IPv6LL Sep 9 00:17:16.695975 containerd[1558]: time="2025-09-09T00:17:16.695888645Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:16.769684 containerd[1558]: time="2025-09-09T00:17:16.769626376Z" level=info msg="Container 1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:16.772926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount110827790.mount: Deactivated successfully. Sep 9 00:17:16.778146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2964217131.mount: Deactivated successfully. Sep 9 00:17:16.815875 containerd[1558]: time="2025-09-09T00:17:16.815510634Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:16.816031 containerd[1558]: time="2025-09-09T00:17:16.815963243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 5.09315673s" Sep 9 00:17:16.816031 containerd[1558]: time="2025-09-09T00:17:16.816006635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:17:16.819268 containerd[1558]: time="2025-09-09T00:17:16.819161627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:15.415 [INFO][4423] cni-plugin/utils.go 100: 
File /var/lib/calico/mtu does not exist Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:15.465 [INFO][4423] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--srhmj-eth0 coredns-668d6bf9bc- kube-system 28474118-60c7-451d-8c91-1c7c0d29a234 809 0 2025-09-09 00:16:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-srhmj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ccddfdbbf1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:15.469 [INFO][4423] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:15.777 [INFO][4555] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" HandleID="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Workload="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:15.778 [INFO][4555] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" HandleID="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Workload="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-srhmj", "timestamp":"2025-09-09 00:17:15.777811533 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:15.778 [INFO][4555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.001 [INFO][4555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.001 [INFO][4555] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.015 [INFO][4555] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.023 [INFO][4555] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.357 [INFO][4555] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.542 [INFO][4555] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.548 [INFO][4555] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.548 [INFO][4555] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.552 [INFO][4555] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583 Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.564 [INFO][4555] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.576 [INFO][4555] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.577 [INFO][4555] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" host="localhost" Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.577 [INFO][4555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:17:16.821889 containerd[1558]: 2025-09-09 00:17:16.577 [INFO][4555] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" HandleID="k8s-pod-network.a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Workload="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.825524 containerd[1558]: 2025-09-09 00:17:16.584 [INFO][4423] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--srhmj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"28474118-60c7-451d-8c91-1c7c0d29a234", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-srhmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ccddfdbbf1", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:16.825524 containerd[1558]: 2025-09-09 00:17:16.584 [INFO][4423] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.825524 containerd[1558]: 2025-09-09 00:17:16.584 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ccddfdbbf1 ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.825524 containerd[1558]: 2025-09-09 00:17:16.598 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.825524 containerd[1558]: 2025-09-09 00:17:16.600 [INFO][4423] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--srhmj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"28474118-60c7-451d-8c91-1c7c0d29a234", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 16, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583", Pod:"coredns-668d6bf9bc-srhmj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ccddfdbbf1", MAC:"f2:84:97:50:c3:e7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:17:16.825524 containerd[1558]: 2025-09-09 00:17:16.800 [INFO][4423] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" Namespace="kube-system" Pod="coredns-668d6bf9bc-srhmj" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--srhmj-eth0" Sep 9 00:17:16.828789 containerd[1558]: time="2025-09-09T00:17:16.828510546Z" level=info msg="CreateContainer within sandbox \"be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:17:16.832934 containerd[1558]: time="2025-09-09T00:17:16.832871691Z" level=info msg="CreateContainer within sandbox \"392fab9999dadcb41220bba04326b0ebd81a6b8277aafeccab2c1f7f1bbd79f6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461\"" Sep 9 00:17:16.839023 containerd[1558]: time="2025-09-09T00:17:16.838967915Z" level=info msg="StartContainer for \"1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461\"" Sep 9 00:17:16.845144 containerd[1558]: time="2025-09-09T00:17:16.844658196Z" level=info msg="connecting to shim 1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461" address="unix:///run/containerd/s/3c0981eb3bbb1a634e175a646332a3a07044ad95026786ddbf56d34e5bb499e4" protocol=ttrpc version=3 Sep 9 00:17:16.853571 containerd[1558]: time="2025-09-09T00:17:16.853438598Z" level=info msg="connecting to shim f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a" address="unix:///run/containerd/s/d7088096c984d83a8c2f3ec9b42ca13b6b4b97d74d9086e38314b22b004a5c96" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:16.895954 containerd[1558]: time="2025-09-09T00:17:16.895899971Z" level=info msg="Container e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:16.905094 systemd[1]: Started cri-containerd-1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461.scope - libcontainer container 
1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461. Sep 9 00:17:16.917248 containerd[1558]: time="2025-09-09T00:17:16.917194562Z" level=info msg="CreateContainer within sandbox \"be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8\"" Sep 9 00:17:16.919108 containerd[1558]: time="2025-09-09T00:17:16.919056607Z" level=info msg="StartContainer for \"e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8\"" Sep 9 00:17:16.922905 systemd[1]: Started cri-containerd-f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a.scope - libcontainer container f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a. Sep 9 00:17:16.924020 containerd[1558]: time="2025-09-09T00:17:16.923180428Z" level=info msg="connecting to shim e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8" address="unix:///run/containerd/s/95ef2ba0b0361566330a2badd3ca378015da076fed885a6a7483c81f5e13544f" protocol=ttrpc version=3 Sep 9 00:17:16.950017 containerd[1558]: time="2025-09-09T00:17:16.949880746Z" level=info msg="connecting to shim a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583" address="unix:///run/containerd/s/f7c90487f10c793d6c16dd541ffd6ebe44b2c81e4fe722912c04b99cd8534e41" namespace=k8s.io protocol=ttrpc version=3 Sep 9 00:17:16.969003 systemd[1]: Started cri-containerd-e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8.scope - libcontainer container e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8. 
Sep 9 00:17:17.004436 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:17.022156 systemd[1]: Started cri-containerd-a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583.scope - libcontainer container a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583. Sep 9 00:17:17.058387 systemd-resolved[1405]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:17:17.065553 containerd[1558]: time="2025-09-09T00:17:17.065505796Z" level=info msg="StartContainer for \"1398712fd7988f65ea7c6cd316196fadf6b7542b1069edd2b5d3ab965624e461\" returns successfully" Sep 9 00:17:17.118608 containerd[1558]: time="2025-09-09T00:17:17.118451733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-srhmj,Uid:28474118-60c7-451d-8c91-1c7c0d29a234,Namespace:kube-system,Attempt:0,} returns sandbox id \"a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583\"" Sep 9 00:17:17.119959 kubelet[2719]: E0909 00:17:17.119935 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:17.122999 containerd[1558]: time="2025-09-09T00:17:17.122874705Z" level=info msg="CreateContainer within sandbox \"a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:17:17.128718 systemd[1]: Started sshd@9-10.0.0.42:22-10.0.0.1:51202.service - OpenSSH per-connection server daemon (10.0.0.1:51202). 
Sep 9 00:17:17.141273 containerd[1558]: time="2025-09-09T00:17:17.141174513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jnwpv,Uid:e6cfb95f-994f-481e-9a63-7060331a5492,Namespace:calico-system,Attempt:0,} returns sandbox id \"f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a\"" Sep 9 00:17:17.150757 containerd[1558]: time="2025-09-09T00:17:17.149491133Z" level=info msg="StartContainer for \"e3005ef3bb27182933e3723cbe7372a4dd49ef3fb3f253b92112b1330d500ce8\" returns successfully" Sep 9 00:17:17.164505 containerd[1558]: time="2025-09-09T00:17:17.164463926Z" level=info msg="Container 8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:17.174473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910448580.mount: Deactivated successfully. Sep 9 00:17:17.190950 containerd[1558]: time="2025-09-09T00:17:17.189930015Z" level=info msg="CreateContainer within sandbox \"a191d71f7523d75134df53603c9a655be07f55d56c536b5a5bff87c08a19c583\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde\"" Sep 9 00:17:17.192123 containerd[1558]: time="2025-09-09T00:17:17.192021291Z" level=info msg="StartContainer for \"8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde\"" Sep 9 00:17:17.194891 containerd[1558]: time="2025-09-09T00:17:17.194851553Z" level=info msg="connecting to shim 8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde" address="unix:///run/containerd/s/f7c90487f10c793d6c16dd541ffd6ebe44b2c81e4fe722912c04b99cd8534e41" protocol=ttrpc version=3 Sep 9 00:17:17.231417 systemd[1]: Started cri-containerd-8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde.scope - libcontainer container 8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde. 
Sep 9 00:17:17.235906 sshd[4770]: Accepted publickey for core from 10.0.0.1 port 51202 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:17.238432 sshd-session[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:17.247491 systemd-logind[1534]: New session 10 of user core. Sep 9 00:17:17.261315 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:17:17.302841 containerd[1558]: time="2025-09-09T00:17:17.302789288Z" level=info msg="StartContainer for \"8142ccc6a9d4399973155c9c8a21fdb49b26471a86dd75d191a25e6c404d4fde\" returns successfully" Sep 9 00:17:17.322846 systemd-networkd[1479]: caliee145afa0d3: Gained IPv6LL Sep 9 00:17:17.448906 systemd-networkd[1479]: calif3b9c615cdc: Gained IPv6LL Sep 9 00:17:17.474651 sshd[4803]: Connection closed by 10.0.0.1 port 51202 Sep 9 00:17:17.476953 sshd-session[4770]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:17.482275 systemd[1]: sshd@9-10.0.0.42:22-10.0.0.1:51202.service: Deactivated successfully. Sep 9 00:17:17.485694 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:17:17.488438 systemd-logind[1534]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:17:17.490843 kubelet[2719]: E0909 00:17:17.490780 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:17.493420 systemd-logind[1534]: Removed session 10. 
Sep 9 00:17:17.496250 kubelet[2719]: E0909 00:17:17.496216 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:17.508225 kubelet[2719]: I0909 00:17:17.508162 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4bqb6" podStartSLOduration=43.508147649 podStartE2EDuration="43.508147649s" podCreationTimestamp="2025-09-09 00:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:17:17.507882 +0000 UTC m=+48.310866111" watchObservedRunningTime="2025-09-09 00:17:17.508147649 +0000 UTC m=+48.311131750" Sep 9 00:17:17.801390 systemd-networkd[1479]: vxlan.calico: Link UP Sep 9 00:17:17.801621 systemd-networkd[1479]: vxlan.calico: Gained carrier Sep 9 00:17:17.961043 systemd-networkd[1479]: cali5ccddfdbbf1: Gained IPv6LL Sep 9 00:17:18.501960 kubelet[2719]: E0909 00:17:18.501885 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:18.502549 kubelet[2719]: E0909 00:17:18.502249 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:18.545147 containerd[1558]: time="2025-09-09T00:17:18.545084817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:18.545895 containerd[1558]: time="2025-09-09T00:17:18.545866033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:17:18.547080 containerd[1558]: time="2025-09-09T00:17:18.547044375Z" level=info 
msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:18.549604 containerd[1558]: time="2025-09-09T00:17:18.549569585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:18.550444 containerd[1558]: time="2025-09-09T00:17:18.550410523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.731200555s" Sep 9 00:17:18.550480 containerd[1558]: time="2025-09-09T00:17:18.550446360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:17:18.554276 containerd[1558]: time="2025-09-09T00:17:18.554256441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:17:18.555949 containerd[1558]: time="2025-09-09T00:17:18.555477433Z" level=info msg="CreateContainer within sandbox \"beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:17:18.565436 containerd[1558]: time="2025-09-09T00:17:18.565390368Z" level=info msg="Container 57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:18.578898 containerd[1558]: time="2025-09-09T00:17:18.578846583Z" level=info msg="CreateContainer within sandbox \"beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437\"" Sep 9 00:17:18.579595 containerd[1558]: time="2025-09-09T00:17:18.579471536Z" level=info msg="StartContainer for \"57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437\"" Sep 9 00:17:18.581068 containerd[1558]: time="2025-09-09T00:17:18.581008931Z" level=info msg="connecting to shim 57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437" address="unix:///run/containerd/s/00751fefa53b0519643b1dd3a38c0a25a1b824317cdb9e6207ce6d88712810c2" protocol=ttrpc version=3 Sep 9 00:17:18.608024 systemd[1]: Started cri-containerd-57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437.scope - libcontainer container 57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437. Sep 9 00:17:18.718664 containerd[1558]: time="2025-09-09T00:17:18.718618266Z" level=info msg="StartContainer for \"57f29a6c8a02b2cf64d20b2d6d4e8f4b972de783964bd860e6d9f895f2e4b437\" returns successfully" Sep 9 00:17:19.506471 kubelet[2719]: E0909 00:17:19.506430 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:19.507228 kubelet[2719]: E0909 00:17:19.506676 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:19.624971 systemd-networkd[1479]: vxlan.calico: Gained IPv6LL Sep 9 00:17:20.571722 containerd[1558]: time="2025-09-09T00:17:20.571667467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:20.572522 containerd[1558]: time="2025-09-09T00:17:20.572489881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active 
requests=0, bytes read=51277746" Sep 9 00:17:20.574107 containerd[1558]: time="2025-09-09T00:17:20.574075026Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:20.576330 containerd[1558]: time="2025-09-09T00:17:20.576291906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:20.576844 containerd[1558]: time="2025-09-09T00:17:20.576819637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.022540353s" Sep 9 00:17:20.576899 containerd[1558]: time="2025-09-09T00:17:20.576849072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:17:20.580598 containerd[1558]: time="2025-09-09T00:17:20.580553404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:17:20.597232 containerd[1558]: time="2025-09-09T00:17:20.597174917Z" level=info msg="CreateContainer within sandbox \"7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:17:20.607541 containerd[1558]: time="2025-09-09T00:17:20.607489323Z" level=info msg="Container a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:20.623945 containerd[1558]: 
time="2025-09-09T00:17:20.623895072Z" level=info msg="CreateContainer within sandbox \"7dcb7f149da1c42dd86439c263f022f5c977ae49728f7da23b32c06a7f7c4595\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\"" Sep 9 00:17:20.624518 containerd[1558]: time="2025-09-09T00:17:20.624475221Z" level=info msg="StartContainer for \"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\"" Sep 9 00:17:20.627516 containerd[1558]: time="2025-09-09T00:17:20.627478196Z" level=info msg="connecting to shim a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b" address="unix:///run/containerd/s/ab5eae72c39428380a8cbb0ba8a5db6cbc013fc946c8a4a7ea18d8100ea0ae8a" protocol=ttrpc version=3 Sep 9 00:17:20.666087 systemd[1]: Started cri-containerd-a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b.scope - libcontainer container a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b. 
Sep 9 00:17:20.738143 containerd[1558]: time="2025-09-09T00:17:20.738021441Z" level=info msg="StartContainer for \"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\" returns successfully" Sep 9 00:17:21.548217 kubelet[2719]: I0909 00:17:21.548132 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-srhmj" podStartSLOduration=47.548109993 podStartE2EDuration="47.548109993s" podCreationTimestamp="2025-09-09 00:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:17:17.535049763 +0000 UTC m=+48.338033874" watchObservedRunningTime="2025-09-09 00:17:21.548109993 +0000 UTC m=+52.351094125" Sep 9 00:17:21.548878 kubelet[2719]: I0909 00:17:21.548254 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6468c59944-bskfc" podStartSLOduration=28.1112895 podStartE2EDuration="33.548247982s" podCreationTimestamp="2025-09-09 00:16:48 +0000 UTC" firstStartedPulling="2025-09-09 00:17:15.142648004 +0000 UTC m=+45.945632115" lastFinishedPulling="2025-09-09 00:17:20.579606496 +0000 UTC m=+51.382590597" observedRunningTime="2025-09-09 00:17:21.546056711 +0000 UTC m=+52.349040822" watchObservedRunningTime="2025-09-09 00:17:21.548247982 +0000 UTC m=+52.351232113" Sep 9 00:17:21.589445 containerd[1558]: time="2025-09-09T00:17:21.589205764Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\" id:\"496a9d2e70183145cc9b58b9d90f480d303f1c3d2d55c6cec8a6e792c3f13977\" pid:5035 exited_at:{seconds:1757377041 nanos:588001855}" Sep 9 00:17:22.490881 systemd[1]: Started sshd@10-10.0.0.42:22-10.0.0.1:41440.service - OpenSSH per-connection server daemon (10.0.0.1:41440). 
Sep 9 00:17:22.659919 sshd[5046]: Accepted publickey for core from 10.0.0.1 port 41440 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:22.661786 sshd-session[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:22.667870 systemd-logind[1534]: New session 11 of user core. Sep 9 00:17:22.677915 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:17:22.809678 sshd[5054]: Connection closed by 10.0.0.1 port 41440 Sep 9 00:17:22.811906 sshd-session[5046]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:22.816305 systemd[1]: sshd@10-10.0.0.42:22-10.0.0.1:41440.service: Deactivated successfully. Sep 9 00:17:22.818999 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:17:22.819772 systemd-logind[1534]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:17:22.821053 systemd-logind[1534]: Removed session 11. Sep 9 00:17:23.817688 containerd[1558]: time="2025-09-09T00:17:23.817615544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:23.828749 containerd[1558]: time="2025-09-09T00:17:23.828702638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:17:23.868548 containerd[1558]: time="2025-09-09T00:17:23.868485739Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:23.884258 containerd[1558]: time="2025-09-09T00:17:23.884216357Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:23.889487 containerd[1558]: time="2025-09-09T00:17:23.889405374Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.308816814s" Sep 9 00:17:23.889881 containerd[1558]: time="2025-09-09T00:17:23.889848405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:17:23.891222 containerd[1558]: time="2025-09-09T00:17:23.890991740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:17:23.891901 containerd[1558]: time="2025-09-09T00:17:23.891870208Z" level=info msg="CreateContainer within sandbox \"758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:17:23.962813 containerd[1558]: time="2025-09-09T00:17:23.962719223Z" level=info msg="Container 5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:23.988680 containerd[1558]: time="2025-09-09T00:17:23.988609294Z" level=info msg="CreateContainer within sandbox \"758e0ae8b8ef3e9df0baa97350efc3df886f1953d34a682b9dc758b768741c19\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c\"" Sep 9 00:17:23.989355 containerd[1558]: time="2025-09-09T00:17:23.989301904Z" level=info msg="StartContainer for \"5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c\"" Sep 9 00:17:23.990662 containerd[1558]: time="2025-09-09T00:17:23.990634484Z" level=info msg="connecting to shim 5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c" 
address="unix:///run/containerd/s/77ddc23506a410d3ed4078f88aa8015bb8643ef47d6a72509f3333be73427f13" protocol=ttrpc version=3 Sep 9 00:17:24.023921 systemd[1]: Started cri-containerd-5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c.scope - libcontainer container 5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c. Sep 9 00:17:24.083018 containerd[1558]: time="2025-09-09T00:17:24.082593916Z" level=info msg="StartContainer for \"5d562daa203e10a636cdd5fea8b518355c0abbabd7ad80ef9d242059dc01e22c\" returns successfully" Sep 9 00:17:24.345088 containerd[1558]: time="2025-09-09T00:17:24.344927834Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:24.346201 containerd[1558]: time="2025-09-09T00:17:24.346159465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:17:24.348128 containerd[1558]: time="2025-09-09T00:17:24.348101389Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 457.070826ms" Sep 9 00:17:24.348176 containerd[1558]: time="2025-09-09T00:17:24.348132487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:17:24.349964 containerd[1558]: time="2025-09-09T00:17:24.349873534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:17:24.352555 containerd[1558]: time="2025-09-09T00:17:24.352503028Z" level=info msg="CreateContainer within sandbox 
\"b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:17:24.394792 containerd[1558]: time="2025-09-09T00:17:24.394003679Z" level=info msg="Container a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:24.410256 containerd[1558]: time="2025-09-09T00:17:24.410187627Z" level=info msg="CreateContainer within sandbox \"b5458cd35df8e4a4f8d9f629fc4f40744f32c66a8b9d8e1ed63278f4d67c7c07\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097\"" Sep 9 00:17:24.410912 containerd[1558]: time="2025-09-09T00:17:24.410885446Z" level=info msg="StartContainer for \"a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097\"" Sep 9 00:17:24.411959 containerd[1558]: time="2025-09-09T00:17:24.411923845Z" level=info msg="connecting to shim a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097" address="unix:///run/containerd/s/f086b5707237174a6505dc3dc170d5fd45d3eb38b700548f662a4edd2b2290d6" protocol=ttrpc version=3 Sep 9 00:17:24.440908 systemd[1]: Started cri-containerd-a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097.scope - libcontainer container a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097. 
Sep 9 00:17:24.505906 containerd[1558]: time="2025-09-09T00:17:24.505843633Z" level=info msg="StartContainer for \"a900398fc90aa3baa0d755f4cc3ca173150489aa730aa2ea319472baf67ce097\" returns successfully" Sep 9 00:17:24.582841 kubelet[2719]: I0909 00:17:24.582715 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c4946788-2b724" podStartSLOduration=30.78174932 podStartE2EDuration="38.582688845s" podCreationTimestamp="2025-09-09 00:16:46 +0000 UTC" firstStartedPulling="2025-09-09 00:17:16.548690583 +0000 UTC m=+47.351674694" lastFinishedPulling="2025-09-09 00:17:24.349630108 +0000 UTC m=+55.152614219" observedRunningTime="2025-09-09 00:17:24.580436629 +0000 UTC m=+55.383420750" watchObservedRunningTime="2025-09-09 00:17:24.582688845 +0000 UTC m=+55.385672956" Sep 9 00:17:24.599156 kubelet[2719]: I0909 00:17:24.598984 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67c4946788-5882s" podStartSLOduration=30.019507127 podStartE2EDuration="38.598963882s" podCreationTimestamp="2025-09-09 00:16:46 +0000 UTC" firstStartedPulling="2025-09-09 00:17:15.31114327 +0000 UTC m=+46.114127381" lastFinishedPulling="2025-09-09 00:17:23.890600025 +0000 UTC m=+54.693584136" observedRunningTime="2025-09-09 00:17:24.598486567 +0000 UTC m=+55.401470678" watchObservedRunningTime="2025-09-09 00:17:24.598963882 +0000 UTC m=+55.401947993" Sep 9 00:17:25.558244 kubelet[2719]: I0909 00:17:25.558192 2719 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:17:27.841903 systemd[1]: Started sshd@11-10.0.0.42:22-10.0.0.1:41444.service - OpenSSH per-connection server daemon (10.0.0.1:41444). Sep 9 00:17:27.896227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1759260676.mount: Deactivated successfully. 
Sep 9 00:17:27.938447 sshd[5168]: Accepted publickey for core from 10.0.0.1 port 41444 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:27.940349 sshd-session[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:27.945803 systemd-logind[1534]: New session 12 of user core. Sep 9 00:17:27.954922 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:17:28.134943 sshd[5170]: Connection closed by 10.0.0.1 port 41444 Sep 9 00:17:28.135220 sshd-session[5168]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:28.139572 systemd[1]: sshd@11-10.0.0.42:22-10.0.0.1:41444.service: Deactivated successfully. Sep 9 00:17:28.141887 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:17:28.143130 systemd-logind[1534]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:17:28.144856 systemd-logind[1534]: Removed session 12. Sep 9 00:17:29.429303 containerd[1558]: time="2025-09-09T00:17:29.429246026Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:29.430766 containerd[1558]: time="2025-09-09T00:17:29.430643215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:17:29.438829 containerd[1558]: time="2025-09-09T00:17:29.438670280Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:29.447429 containerd[1558]: time="2025-09-09T00:17:29.446589897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:29.447690 containerd[1558]: time="2025-09-09T00:17:29.447632007Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.09770902s" Sep 9 00:17:29.451726 containerd[1558]: time="2025-09-09T00:17:29.447702705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:17:29.451726 containerd[1558]: time="2025-09-09T00:17:29.450187983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:17:29.452033 containerd[1558]: time="2025-09-09T00:17:29.451972013Z" level=info msg="CreateContainer within sandbox \"f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:17:29.478186 containerd[1558]: time="2025-09-09T00:17:29.477057104Z" level=info msg="Container 1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:29.496264 containerd[1558]: time="2025-09-09T00:17:29.496208790Z" level=info msg="CreateContainer within sandbox \"f2b764c937e051d4dd88f47d763cd5e8a75dade00247720e76109e63aaf0350a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\"" Sep 9 00:17:29.497839 containerd[1558]: time="2025-09-09T00:17:29.496887336Z" level=info msg="StartContainer for \"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\"" Sep 9 00:17:29.498657 containerd[1558]: time="2025-09-09T00:17:29.498616859Z" level=info msg="connecting to shim 1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3" 
address="unix:///run/containerd/s/d7088096c984d83a8c2f3ec9b42ca13b6b4b97d74d9086e38314b22b004a5c96" protocol=ttrpc version=3 Sep 9 00:17:29.571060 systemd[1]: Started cri-containerd-1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3.scope - libcontainer container 1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3. Sep 9 00:17:29.699387 containerd[1558]: time="2025-09-09T00:17:29.699230177Z" level=info msg="StartContainer for \"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\" returns successfully" Sep 9 00:17:30.693562 containerd[1558]: time="2025-09-09T00:17:30.693503788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\" id:\"ae192dacf8e5ef4afa3bf4806c1d69facbac62f51ee309ba799fd8c98d683988\" pid:5250 exit_status:1 exited_at:{seconds:1757377050 nanos:693084085}" Sep 9 00:17:30.802164 kubelet[2719]: I0909 00:17:30.802077 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-jnwpv" podStartSLOduration=30.497154077 podStartE2EDuration="42.800606466s" podCreationTimestamp="2025-09-09 00:16:48 +0000 UTC" firstStartedPulling="2025-09-09 00:17:17.145862301 +0000 UTC m=+47.948846412" lastFinishedPulling="2025-09-09 00:17:29.44931467 +0000 UTC m=+60.252298801" observedRunningTime="2025-09-09 00:17:30.80017479 +0000 UTC m=+61.603158921" watchObservedRunningTime="2025-09-09 00:17:30.800606466 +0000 UTC m=+61.603590577" Sep 9 00:17:31.485002 containerd[1558]: time="2025-09-09T00:17:31.484948197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:31.485752 containerd[1558]: time="2025-09-09T00:17:31.485707307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:17:31.486996 containerd[1558]: 
time="2025-09-09T00:17:31.486939501Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:31.489076 containerd[1558]: time="2025-09-09T00:17:31.489043374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:31.489663 containerd[1558]: time="2025-09-09T00:17:31.489630100Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.039420063s" Sep 9 00:17:31.489706 containerd[1558]: time="2025-09-09T00:17:31.489665729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:17:31.490771 containerd[1558]: time="2025-09-09T00:17:31.490635365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:17:31.491914 containerd[1558]: time="2025-09-09T00:17:31.491884353Z" level=info msg="CreateContainer within sandbox \"be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:17:31.503469 containerd[1558]: time="2025-09-09T00:17:31.503404561Z" level=info msg="Container 07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:31.517267 containerd[1558]: time="2025-09-09T00:17:31.517217777Z" level=info 
msg="CreateContainer within sandbox \"be66d8c8034edb40fb8c0abb86b397ef2316071c6ed814e9a3402832a3639f2d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2\"" Sep 9 00:17:31.519754 containerd[1558]: time="2025-09-09T00:17:31.517817408Z" level=info msg="StartContainer for \"07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2\"" Sep 9 00:17:31.519754 containerd[1558]: time="2025-09-09T00:17:31.519220674Z" level=info msg="connecting to shim 07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2" address="unix:///run/containerd/s/95ef2ba0b0361566330a2badd3ca378015da076fed885a6a7483c81f5e13544f" protocol=ttrpc version=3 Sep 9 00:17:31.546007 systemd[1]: Started cri-containerd-07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2.scope - libcontainer container 07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2. Sep 9 00:17:31.602294 containerd[1558]: time="2025-09-09T00:17:31.600711406Z" level=info msg="StartContainer for \"07e6ae183aa29dd7f05fc4b96555843acc14db0b06a096ce250664caf82a8ab2\" returns successfully" Sep 9 00:17:31.706904 containerd[1558]: time="2025-09-09T00:17:31.706841680Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\" id:\"507ac461f1ad3311371e9c289bd2ae67ba559e633808ef85dae3b1731c4d5e60\" pid:5308 exited_at:{seconds:1757377051 nanos:706037745}" Sep 9 00:17:32.405750 kubelet[2719]: I0909 00:17:32.405696 2719 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:17:32.406227 kubelet[2719]: I0909 00:17:32.405775 2719 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:17:32.618689 kubelet[2719]: 
I0909 00:17:32.618615 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6gwrd" podStartSLOduration=24.850158316 podStartE2EDuration="44.618589757s" podCreationTimestamp="2025-09-09 00:16:48 +0000 UTC" firstStartedPulling="2025-09-09 00:17:11.721993436 +0000 UTC m=+42.524977547" lastFinishedPulling="2025-09-09 00:17:31.490424877 +0000 UTC m=+62.293408988" observedRunningTime="2025-09-09 00:17:32.61706051 +0000 UTC m=+63.420044621" watchObservedRunningTime="2025-09-09 00:17:32.618589757 +0000 UTC m=+63.421573858" Sep 9 00:17:33.157182 systemd[1]: Started sshd@12-10.0.0.42:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). Sep 9 00:17:33.249494 sshd[5327]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:33.251604 sshd-session[5327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:33.258236 systemd-logind[1534]: New session 13 of user core. Sep 9 00:17:33.263974 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:17:33.448231 sshd[5333]: Connection closed by 10.0.0.1 port 49674 Sep 9 00:17:33.448814 sshd-session[5327]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:33.461463 systemd[1]: sshd@12-10.0.0.42:22-10.0.0.1:49674.service: Deactivated successfully. Sep 9 00:17:33.464861 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:17:33.467805 systemd-logind[1534]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:17:33.472434 systemd[1]: Started sshd@13-10.0.0.42:22-10.0.0.1:49678.service - OpenSSH per-connection server daemon (10.0.0.1:49678). Sep 9 00:17:33.474307 systemd-logind[1534]: Removed session 13. 
Sep 9 00:17:33.531445 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 49678 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:33.536471 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:33.548379 systemd-logind[1534]: New session 14 of user core. Sep 9 00:17:33.554957 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:17:33.639983 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857821315.mount: Deactivated successfully. Sep 9 00:17:33.707557 containerd[1558]: time="2025-09-09T00:17:33.707365290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:33.708720 containerd[1558]: time="2025-09-09T00:17:33.708679319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:17:33.710116 containerd[1558]: time="2025-09-09T00:17:33.710071290Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:33.713187 containerd[1558]: time="2025-09-09T00:17:33.713090884Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:17:33.713920 containerd[1558]: time="2025-09-09T00:17:33.713630898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.222956237s" 
Sep 9 00:17:33.713920 containerd[1558]: time="2025-09-09T00:17:33.713668080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:17:33.716367 containerd[1558]: time="2025-09-09T00:17:33.716317179Z" level=info msg="CreateContainer within sandbox \"beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:17:33.728766 containerd[1558]: time="2025-09-09T00:17:33.727958794Z" level=info msg="Container 396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1: CDI devices from CRI Config.CDIDevices: []" Sep 9 00:17:33.740066 containerd[1558]: time="2025-09-09T00:17:33.740003319Z" level=info msg="CreateContainer within sandbox \"beaa27ca9a8ce6a39b5d149fa0b19cea1c9d5cbde379421150a5e9d854bb5e7a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1\"" Sep 9 00:17:33.741720 containerd[1558]: time="2025-09-09T00:17:33.741061794Z" level=info msg="StartContainer for \"396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1\"" Sep 9 00:17:33.743337 containerd[1558]: time="2025-09-09T00:17:33.742835863Z" level=info msg="connecting to shim 396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1" address="unix:///run/containerd/s/00751fefa53b0519643b1dd3a38c0a25a1b824317cdb9e6207ce6d88712810c2" protocol=ttrpc version=3 Sep 9 00:17:33.769955 systemd[1]: Started cri-containerd-396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1.scope - libcontainer container 396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1. 
Sep 9 00:17:33.878562 containerd[1558]: time="2025-09-09T00:17:33.876983080Z" level=info msg="StartContainer for \"396a11a5f342b9a5694f5f87f26832bba64a6e20c3b257fd8eb2f4b2025b08a1\" returns successfully" Sep 9 00:17:34.123530 sshd[5350]: Connection closed by 10.0.0.1 port 49678 Sep 9 00:17:34.124850 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:34.134849 systemd[1]: sshd@13-10.0.0.42:22-10.0.0.1:49678.service: Deactivated successfully. Sep 9 00:17:34.137045 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:17:34.138033 systemd-logind[1534]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:17:34.142750 systemd[1]: Started sshd@14-10.0.0.42:22-10.0.0.1:49686.service - OpenSSH per-connection server daemon (10.0.0.1:49686). Sep 9 00:17:34.143549 systemd-logind[1534]: Removed session 14. Sep 9 00:17:34.223265 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 49686 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:34.225077 sshd-session[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:34.229962 systemd-logind[1534]: New session 15 of user core. Sep 9 00:17:34.239880 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:17:34.399017 sshd[5401]: Connection closed by 10.0.0.1 port 49686 Sep 9 00:17:34.400204 sshd-session[5398]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:34.406370 systemd[1]: sshd@14-10.0.0.42:22-10.0.0.1:49686.service: Deactivated successfully. Sep 9 00:17:34.408967 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:17:34.411564 systemd-logind[1534]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:17:34.413764 systemd-logind[1534]: Removed session 15. 
Sep 9 00:17:34.627830 kubelet[2719]: I0909 00:17:34.626680 2719 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b98875998-zd8k2" podStartSLOduration=4.926758622 podStartE2EDuration="23.62665397s" podCreationTimestamp="2025-09-09 00:17:11 +0000 UTC" firstStartedPulling="2025-09-09 00:17:15.014664235 +0000 UTC m=+45.817648346" lastFinishedPulling="2025-09-09 00:17:33.714559582 +0000 UTC m=+64.517543694" observedRunningTime="2025-09-09 00:17:34.62627044 +0000 UTC m=+65.429254581" watchObservedRunningTime="2025-09-09 00:17:34.62665397 +0000 UTC m=+65.429638081" Sep 9 00:17:38.429878 containerd[1558]: time="2025-09-09T00:17:38.429799737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\" id:\"d06bce37b0949403ec7272ce5d885346472146e051fac09b2e6ec8f37cfb08ed\" pid:5437 exited_at:{seconds:1757377058 nanos:429323470}" Sep 9 00:17:39.414571 systemd[1]: Started sshd@15-10.0.0.42:22-10.0.0.1:49696.service - OpenSSH per-connection server daemon (10.0.0.1:49696). Sep 9 00:17:39.463944 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 49696 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:39.465357 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:39.470133 systemd-logind[1534]: New session 16 of user core. Sep 9 00:17:39.484889 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:17:39.606634 sshd[5454]: Connection closed by 10.0.0.1 port 49696 Sep 9 00:17:39.607000 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:39.611236 systemd[1]: sshd@15-10.0.0.42:22-10.0.0.1:49696.service: Deactivated successfully. Sep 9 00:17:39.613713 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:17:39.614628 systemd-logind[1534]: Session 16 logged out. Waiting for processes to exit. 
Sep 9 00:17:39.616656 systemd-logind[1534]: Removed session 16. Sep 9 00:17:41.643113 containerd[1558]: time="2025-09-09T00:17:41.643044534Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\" id:\"026c0d456fe06158f6ba4ca13ea9d252ab3f43dcdcfa8f51e852e572c298373d\" pid:5480 exit_status:1 exited_at:{seconds:1757377061 nanos:626597816}" Sep 9 00:17:44.630768 systemd[1]: Started sshd@16-10.0.0.42:22-10.0.0.1:54396.service - OpenSSH per-connection server daemon (10.0.0.1:54396). Sep 9 00:17:44.707983 sshd[5494]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:44.709945 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:44.715155 systemd-logind[1534]: New session 17 of user core. Sep 9 00:17:44.723939 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:17:44.860130 sshd[5496]: Connection closed by 10.0.0.1 port 54396 Sep 9 00:17:44.860488 sshd-session[5494]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:44.865269 systemd[1]: sshd@16-10.0.0.42:22-10.0.0.1:54396.service: Deactivated successfully. Sep 9 00:17:44.867692 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:17:44.868785 systemd-logind[1534]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:17:44.870792 systemd-logind[1534]: Removed session 17. Sep 9 00:17:49.878088 systemd[1]: Started sshd@17-10.0.0.42:22-10.0.0.1:54400.service - OpenSSH per-connection server daemon (10.0.0.1:54400). Sep 9 00:17:50.068791 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 54400 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:50.071711 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:50.078076 systemd-logind[1534]: New session 18 of user core. 
Sep 9 00:17:50.082952 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 00:17:50.247558 sshd[5512]: Connection closed by 10.0.0.1 port 54400 Sep 9 00:17:50.247893 sshd-session[5510]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:50.252661 systemd[1]: sshd@17-10.0.0.42:22-10.0.0.1:54400.service: Deactivated successfully. Sep 9 00:17:50.255192 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:17:50.256088 systemd-logind[1534]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:17:50.257356 systemd-logind[1534]: Removed session 18. Sep 9 00:17:51.333628 kubelet[2719]: E0909 00:17:51.333572 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:51.568840 containerd[1558]: time="2025-09-09T00:17:51.568794860Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\" id:\"d6e654355a8a1fe71be64d3729079cf8f6f0033f7e7a77308c134dc6cdf5f20d\" pid:5536 exited_at:{seconds:1757377071 nanos:568611469}" Sep 9 00:17:52.724590 update_engine[1539]: I20250909 00:17:52.724491 1539 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 9 00:17:52.724590 update_engine[1539]: I20250909 00:17:52.724570 1539 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 9 00:17:52.725402 update_engine[1539]: I20250909 00:17:52.725379 1539 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 9 00:17:52.726281 update_engine[1539]: I20250909 00:17:52.726244 1539 omaha_request_params.cc:62] Current group set to beta Sep 9 00:17:52.726428 update_engine[1539]: I20250909 00:17:52.726402 1539 update_attempter.cc:499] Already updated boot flags. Skipping. 
Sep 9 00:17:52.726428 update_engine[1539]: I20250909 00:17:52.726415 1539 update_attempter.cc:643] Scheduling an action processor start. Sep 9 00:17:52.726496 update_engine[1539]: I20250909 00:17:52.726438 1539 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 9 00:17:52.726529 update_engine[1539]: I20250909 00:17:52.726493 1539 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 9 00:17:52.726609 update_engine[1539]: I20250909 00:17:52.726565 1539 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 9 00:17:52.726609 update_engine[1539]: I20250909 00:17:52.726583 1539 omaha_request_action.cc:272] Request: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: Sep 9 00:17:52.726609 update_engine[1539]: I20250909 00:17:52.726590 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 00:17:52.732949 update_engine[1539]: I20250909 00:17:52.732887 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 00:17:52.733323 locksmithd[1587]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 9 00:17:52.733667 update_engine[1539]: I20250909 00:17:52.733489 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 9 00:17:52.742511 update_engine[1539]: E20250909 00:17:52.742439 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 00:17:52.742630 update_engine[1539]: I20250909 00:17:52.742576 1539 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 9 00:17:53.333549 kubelet[2719]: E0909 00:17:53.333505 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:17:55.262144 systemd[1]: Started sshd@18-10.0.0.42:22-10.0.0.1:40450.service - OpenSSH per-connection server daemon (10.0.0.1:40450). Sep 9 00:17:55.333822 sshd[5547]: Accepted publickey for core from 10.0.0.1 port 40450 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:17:55.345206 sshd-session[5547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:17:55.350277 systemd-logind[1534]: New session 19 of user core. Sep 9 00:17:55.364944 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:17:55.580990 sshd[5549]: Connection closed by 10.0.0.1 port 40450 Sep 9 00:17:55.581259 sshd-session[5547]: pam_unix(sshd:session): session closed for user core Sep 9 00:17:55.586886 systemd[1]: sshd@18-10.0.0.42:22-10.0.0.1:40450.service: Deactivated successfully. Sep 9 00:17:55.589217 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:17:55.590040 systemd-logind[1534]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:17:55.591360 systemd-logind[1534]: Removed session 19. Sep 9 00:17:59.021391 kernel: hrtimer: interrupt took 2510252 ns Sep 9 00:18:00.599336 systemd[1]: Started sshd@19-10.0.0.42:22-10.0.0.1:57362.service - OpenSSH per-connection server daemon (10.0.0.1:57362). 
Sep 9 00:18:00.660890 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 57362 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:18:00.662942 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:18:00.668313 systemd-logind[1534]: New session 20 of user core. Sep 9 00:18:00.684095 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:18:00.827230 sshd[5572]: Connection closed by 10.0.0.1 port 57362 Sep 9 00:18:00.827669 sshd-session[5570]: pam_unix(sshd:session): session closed for user core Sep 9 00:18:00.846325 systemd[1]: sshd@19-10.0.0.42:22-10.0.0.1:57362.service: Deactivated successfully. Sep 9 00:18:00.848605 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:18:00.849842 systemd-logind[1534]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:18:00.853578 systemd[1]: Started sshd@20-10.0.0.42:22-10.0.0.1:57376.service - OpenSSH per-connection server daemon (10.0.0.1:57376). Sep 9 00:18:00.854505 systemd-logind[1534]: Removed session 20. Sep 9 00:18:00.921218 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 57376 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:18:00.922973 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:18:00.928278 systemd-logind[1534]: New session 21 of user core. Sep 9 00:18:00.937905 systemd[1]: Started session-21.scope - Session 21 of User core. 
Sep 9 00:18:01.730479 containerd[1558]: time="2025-09-09T00:18:01.730433637Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\" id:\"887ec1548543a2fe14188653f82795f70c17f1f623c59342c512f8c6768b4d0b\" pid:5605 exited_at:{seconds:1757377081 nanos:730062562}" Sep 9 00:18:02.063356 sshd[5587]: Connection closed by 10.0.0.1 port 57376 Sep 9 00:18:02.073713 systemd[1]: sshd@20-10.0.0.42:22-10.0.0.1:57376.service: Deactivated successfully. Sep 9 00:18:02.063969 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Sep 9 00:18:02.076718 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:18:02.077702 systemd-logind[1534]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:18:02.081514 systemd[1]: Started sshd@21-10.0.0.42:22-10.0.0.1:57390.service - OpenSSH per-connection server daemon (10.0.0.1:57390). Sep 9 00:18:02.082498 systemd-logind[1534]: Removed session 21. Sep 9 00:18:02.148032 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 57390 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ Sep 9 00:18:02.149817 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:18:02.154836 systemd-logind[1534]: New session 22 of user core. Sep 9 00:18:02.170900 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 9 00:18:02.334636 kubelet[2719]: E0909 00:18:02.334387 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:02.698445 update_engine[1539]: I20250909 00:18:02.698328 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:18:02.699004 update_engine[1539]: I20250909 00:18:02.698714 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:18:02.699157 update_engine[1539]: I20250909 00:18:02.699126 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:18:02.706751 update_engine[1539]: E20250909 00:18:02.706662 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:18:02.706828 update_engine[1539]: I20250909 00:18:02.706779 1539 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Sep 9 00:18:03.334317 kubelet[2719]: E0909 00:18:03.334272 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:03.434426 sshd[5625]: Connection closed by 10.0.0.1 port 57390
Sep 9 00:18:03.434857 sshd-session[5623]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:03.445590 systemd[1]: sshd@21-10.0.0.42:22-10.0.0.1:57390.service: Deactivated successfully.
Sep 9 00:18:03.448243 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 00:18:03.449205 systemd-logind[1534]: Session 22 logged out. Waiting for processes to exit.
Sep 9 00:18:03.453333 systemd[1]: Started sshd@22-10.0.0.42:22-10.0.0.1:57394.service - OpenSSH per-connection server daemon (10.0.0.1:57394).
Sep 9 00:18:03.454241 systemd-logind[1534]: Removed session 22.
Sep 9 00:18:03.516323 sshd[5661]: Accepted publickey for core from 10.0.0.1 port 57394 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:03.518355 sshd-session[5661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:03.523455 systemd-logind[1534]: New session 23 of user core.
Sep 9 00:18:03.539892 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 00:18:04.192401 sshd[5663]: Connection closed by 10.0.0.1 port 57394
Sep 9 00:18:04.192790 sshd-session[5661]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:04.202192 systemd[1]: sshd@22-10.0.0.42:22-10.0.0.1:57394.service: Deactivated successfully.
Sep 9 00:18:04.204935 systemd[1]: session-23.scope: Deactivated successfully.
Sep 9 00:18:04.205829 systemd-logind[1534]: Session 23 logged out. Waiting for processes to exit.
Sep 9 00:18:04.209536 systemd[1]: Started sshd@23-10.0.0.42:22-10.0.0.1:57410.service - OpenSSH per-connection server daemon (10.0.0.1:57410).
Sep 9 00:18:04.210492 systemd-logind[1534]: Removed session 23.
Sep 9 00:18:04.268277 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 57410 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:04.269906 sshd-session[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:04.274583 systemd-logind[1534]: New session 24 of user core.
Sep 9 00:18:04.278917 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 9 00:18:04.395597 sshd[5677]: Connection closed by 10.0.0.1 port 57410
Sep 9 00:18:04.395937 sshd-session[5675]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:04.400423 systemd[1]: sshd@23-10.0.0.42:22-10.0.0.1:57410.service: Deactivated successfully.
Sep 9 00:18:04.402821 systemd[1]: session-24.scope: Deactivated successfully.
Sep 9 00:18:04.403662 systemd-logind[1534]: Session 24 logged out. Waiting for processes to exit.
Sep 9 00:18:04.405777 systemd-logind[1534]: Removed session 24.
Sep 9 00:18:09.423050 systemd[1]: Started sshd@24-10.0.0.42:22-10.0.0.1:57418.service - OpenSSH per-connection server daemon (10.0.0.1:57418).
Sep 9 00:18:09.467872 sshd[5692]: Accepted publickey for core from 10.0.0.1 port 57418 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:09.469264 sshd-session[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:09.473834 systemd-logind[1534]: New session 25 of user core.
Sep 9 00:18:09.483873 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 9 00:18:09.597312 sshd[5694]: Connection closed by 10.0.0.1 port 57418
Sep 9 00:18:09.597621 sshd-session[5692]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:09.601882 systemd[1]: sshd@24-10.0.0.42:22-10.0.0.1:57418.service: Deactivated successfully.
Sep 9 00:18:09.604217 systemd[1]: session-25.scope: Deactivated successfully.
Sep 9 00:18:09.605109 systemd-logind[1534]: Session 25 logged out. Waiting for processes to exit.
Sep 9 00:18:09.606533 systemd-logind[1534]: Removed session 25.
Sep 9 00:18:11.540213 containerd[1558]: time="2025-09-09T00:18:11.539781720Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bd53949aede677d5748eb161b441c726670ddbde9d0b0c8d3bd1bfabaeafdc73\" id:\"8388a342e29bc8b7025a7bfa5859f9067ce909e65a30a90a9273da42686da4cf\" pid:5717 exited_at:{seconds:1757377091 nanos:539337757}"
Sep 9 00:18:12.700168 update_engine[1539]: I20250909 00:18:12.700035 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:18:12.701157 update_engine[1539]: I20250909 00:18:12.701124 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:18:12.701425 update_engine[1539]: I20250909 00:18:12.701390 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:18:12.709170 update_engine[1539]: E20250909 00:18:12.709028 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:18:12.709170 update_engine[1539]: I20250909 00:18:12.709134 1539 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 9 00:18:14.610987 systemd[1]: Started sshd@25-10.0.0.42:22-10.0.0.1:52776.service - OpenSSH per-connection server daemon (10.0.0.1:52776).
Sep 9 00:18:14.680082 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 52776 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:14.682027 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:14.688706 systemd-logind[1534]: New session 26 of user core.
Sep 9 00:18:14.693920 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 00:18:14.996922 sshd[5732]: Connection closed by 10.0.0.1 port 52776
Sep 9 00:18:14.997309 sshd-session[5730]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:15.002872 systemd[1]: sshd@25-10.0.0.42:22-10.0.0.1:52776.service: Deactivated successfully.
Sep 9 00:18:15.007535 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 00:18:15.009011 systemd-logind[1534]: Session 26 logged out. Waiting for processes to exit.
Sep 9 00:18:15.010827 systemd-logind[1534]: Removed session 26.
Sep 9 00:18:16.467276 containerd[1558]: time="2025-09-09T00:18:16.467181309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\" id:\"9b107aec87d053c80c7bbf1a6205a7672f2c2e632fc797e34961d135525eabca\" pid:5755 exited_at:{seconds:1757377096 nanos:466778185}"
Sep 9 00:18:20.010634 systemd[1]: Started sshd@26-10.0.0.42:22-10.0.0.1:55706.service - OpenSSH per-connection server daemon (10.0.0.1:55706).
Sep 9 00:18:20.073935 sshd[5769]: Accepted publickey for core from 10.0.0.1 port 55706 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:20.075563 sshd-session[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:20.080323 systemd-logind[1534]: New session 27 of user core.
Sep 9 00:18:20.087906 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 9 00:18:20.353151 sshd[5771]: Connection closed by 10.0.0.1 port 55706
Sep 9 00:18:20.355056 sshd-session[5769]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:20.359358 systemd-logind[1534]: Session 27 logged out. Waiting for processes to exit.
Sep 9 00:18:20.360219 systemd[1]: sshd@26-10.0.0.42:22-10.0.0.1:55706.service: Deactivated successfully.
Sep 9 00:18:20.364478 systemd[1]: session-27.scope: Deactivated successfully.
Sep 9 00:18:20.369027 systemd-logind[1534]: Removed session 27.
Sep 9 00:18:21.591803 containerd[1558]: time="2025-09-09T00:18:21.591706476Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\" id:\"92e02099d729240c47c05e2750bbbb8bf178185c1491e1e7fefa55bebefecbbb\" pid:5794 exited_at:{seconds:1757377101 nanos:591374868}"
Sep 9 00:18:22.699627 update_engine[1539]: I20250909 00:18:22.699537 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:18:22.700151 update_engine[1539]: I20250909 00:18:22.699851 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:18:22.700226 update_engine[1539]: I20250909 00:18:22.700176 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:18:22.710023 update_engine[1539]: E20250909 00:18:22.709964 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:18:22.710158 update_engine[1539]: I20250909 00:18:22.710022 1539 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 9 00:18:22.710158 update_engine[1539]: I20250909 00:18:22.710036 1539 omaha_request_action.cc:617] Omaha request response:
Sep 9 00:18:22.710227 update_engine[1539]: E20250909 00:18:22.710177 1539 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 9 00:18:22.710227 update_engine[1539]: I20250909 00:18:22.710211 1539 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 9 00:18:22.710227 update_engine[1539]: I20250909 00:18:22.710219 1539 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 00:18:22.710321 update_engine[1539]: I20250909 00:18:22.710227 1539 update_attempter.cc:306] Processing Done.
Sep 9 00:18:22.839091 update_engine[1539]: E20250909 00:18:22.838894 1539 update_attempter.cc:619] Update failed.
Sep 9 00:18:22.839091 update_engine[1539]: I20250909 00:18:22.839032 1539 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 9 00:18:22.839091 update_engine[1539]: I20250909 00:18:22.839039 1539 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 9 00:18:22.839091 update_engine[1539]: I20250909 00:18:22.839047 1539 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 9 00:18:22.846503 update_engine[1539]: I20250909 00:18:22.839139 1539 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 9 00:18:22.846503 update_engine[1539]: I20250909 00:18:22.839169 1539 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 9 00:18:22.846503 update_engine[1539]: I20250909 00:18:22.839175 1539 omaha_request_action.cc:272] Request:
Sep 9 00:18:22.846503 update_engine[1539]:
Sep 9 00:18:22.846503 update_engine[1539]:
Sep 9 00:18:22.846503 update_engine[1539]:
Sep 9 00:18:22.846503 update_engine[1539]:
Sep 9 00:18:22.846503 update_engine[1539]:
Sep 9 00:18:22.846503 update_engine[1539]:
Sep 9 00:18:22.846503 update_engine[1539]: I20250909 00:18:22.839183 1539 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:18:22.846503 update_engine[1539]: I20250909 00:18:22.839388 1539 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:18:22.846503 update_engine[1539]: I20250909 00:18:22.839623 1539 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:18:22.846772 locksmithd[1587]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 9 00:18:22.852326 update_engine[1539]: E20250909 00:18:22.852268 1539 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:18:22.852374 update_engine[1539]: I20250909 00:18:22.852349 1539 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 9 00:18:22.852374 update_engine[1539]: I20250909 00:18:22.852360 1539 omaha_request_action.cc:617] Omaha request response:
Sep 9 00:18:22.852374 update_engine[1539]: I20250909 00:18:22.852368 1539 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 00:18:22.852440 update_engine[1539]: I20250909 00:18:22.852375 1539 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 9 00:18:22.852440 update_engine[1539]: I20250909 00:18:22.852381 1539 update_attempter.cc:306] Processing Done.
Sep 9 00:18:22.852440 update_engine[1539]: I20250909 00:18:22.852388 1539 update_attempter.cc:310] Error event sent.
Sep 9 00:18:22.852440 update_engine[1539]: I20250909 00:18:22.852401 1539 update_check_scheduler.cc:74] Next update check in 40m35s
Sep 9 00:18:22.852906 locksmithd[1587]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 9 00:18:23.334302 kubelet[2719]: E0909 00:18:23.334240 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:25.373152 systemd[1]: Started sshd@27-10.0.0.42:22-10.0.0.1:55714.service - OpenSSH per-connection server daemon (10.0.0.1:55714).
Sep 9 00:18:25.427051 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 55714 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:25.429006 sshd-session[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:25.434108 systemd-logind[1534]: New session 28 of user core.
Sep 9 00:18:25.439959 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 9 00:18:25.837577 sshd[5809]: Connection closed by 10.0.0.1 port 55714
Sep 9 00:18:25.838001 sshd-session[5807]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:25.843305 systemd[1]: sshd@27-10.0.0.42:22-10.0.0.1:55714.service: Deactivated successfully.
Sep 9 00:18:25.845830 systemd[1]: session-28.scope: Deactivated successfully.
Sep 9 00:18:25.846876 systemd-logind[1534]: Session 28 logged out. Waiting for processes to exit.
Sep 9 00:18:25.848824 systemd-logind[1534]: Removed session 28.
Sep 9 00:18:26.334060 kubelet[2719]: E0909 00:18:26.334003 2719 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:18:30.858827 systemd[1]: Started sshd@28-10.0.0.42:22-10.0.0.1:47672.service - OpenSSH per-connection server daemon (10.0.0.1:47672).
Sep 9 00:18:30.921390 sshd[5827]: Accepted publickey for core from 10.0.0.1 port 47672 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:30.923341 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:30.929168 systemd-logind[1534]: New session 29 of user core.
Sep 9 00:18:30.935943 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 00:18:31.063849 sshd[5830]: Connection closed by 10.0.0.1 port 47672
Sep 9 00:18:31.064184 sshd-session[5827]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:31.068946 systemd[1]: sshd@28-10.0.0.42:22-10.0.0.1:47672.service: Deactivated successfully.
Sep 9 00:18:31.071619 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 00:18:31.072448 systemd-logind[1534]: Session 29 logged out. Waiting for processes to exit.
Sep 9 00:18:31.073878 systemd-logind[1534]: Removed session 29.
Sep 9 00:18:31.719211 containerd[1558]: time="2025-09-09T00:18:31.719159049Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1181bd5a3f5529bf79495fbb585b42c60b9d28c3cd39214cc64b97e502d07ef3\" id:\"b88d6c9c200f7fb9ef9150beeefc934cd2235e70a24740f2d4d31ac937732591\" pid:5854 exited_at:{seconds:1757377111 nanos:718721022}"
Sep 9 00:18:36.079221 systemd[1]: Started sshd@29-10.0.0.42:22-10.0.0.1:47678.service - OpenSSH per-connection server daemon (10.0.0.1:47678).
Sep 9 00:18:36.144607 sshd[5869]: Accepted publickey for core from 10.0.0.1 port 47678 ssh2: RSA SHA256:IbA9FJg7nebsC6CoygaCnKgH4vmO8r1PFW0NTspVTTQ
Sep 9 00:18:36.146124 sshd-session[5869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:36.150953 systemd-logind[1534]: New session 30 of user core.
Sep 9 00:18:36.162877 systemd[1]: Started session-30.scope - Session 30 of User core.
Sep 9 00:18:36.341590 sshd[5871]: Connection closed by 10.0.0.1 port 47678
Sep 9 00:18:36.342162 sshd-session[5869]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:36.346584 systemd[1]: sshd@29-10.0.0.42:22-10.0.0.1:47678.service: Deactivated successfully.
Sep 9 00:18:36.348699 systemd[1]: session-30.scope: Deactivated successfully.
Sep 9 00:18:36.349914 systemd-logind[1534]: Session 30 logged out. Waiting for processes to exit.
Sep 9 00:18:36.351622 systemd-logind[1534]: Removed session 30.
Sep 9 00:18:38.400793 containerd[1558]: time="2025-09-09T00:18:38.400753635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a4224148a0ff3ca0cdf8c31ea6f3270b39608a6aafd2ad5703ef83513b96bf4b\" id:\"82293ccbe90ba247ba7303f144b946e35d0f92aa5af108cbb3bb9a736353781f\" pid:5903 exited_at:{seconds:1757377118 nanos:400533780}"