Sep 5 00:09:24.985253 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025
Sep 5 00:09:24.985280 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:09:24.985292 kernel: BIOS-provided physical RAM map:
Sep 5 00:09:24.985299 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:09:24.985305 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:09:24.985311 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:09:24.985319 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:09:24.985327 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:09:24.985336 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:09:24.985347 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:09:24.985356 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:09:24.985365 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:09:24.985378 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:09:24.985384 kernel: NX (Execute Disable) protection: active
Sep 5 00:09:24.985392 kernel: APIC: Static calls initialized
Sep 5 00:09:24.985405 kernel: SMBIOS 2.8 present.
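The two "usable" e820 ranges above are the regions the kernel will actually manage as RAM. A minimal sketch of totalling them (the ranges are copied from the entries above; the snippet and its variable names are illustrative, not part of the log):

```python
# Sum the "usable" regions of the BIOS-e820 map printed above.
import re

e820_lines = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]

usable = 0
for line in e820_lines:
    m = re.search(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] usable", line)
    if m:
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        usable += end - start + 1  # e820 ranges are inclusive

print(f"{usable} bytes = {usable // 1024} KiB")  # 2633219072 bytes = 2571503 KiB
```

That lands within a few hundred KiB of the "2434592K/2571752K available" figure the kernel prints later; the small difference plausibly reflects the first-page and 0xa0000-0xfffff e820 adjustments logged below.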
Sep 5 00:09:24.985412 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:09:24.985497 kernel: Hypervisor detected: KVM
Sep 5 00:09:24.985504 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:09:24.985511 kernel: kvm-clock: using sched offset of 2902895378 cycles
Sep 5 00:09:24.985518 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:09:24.985528 kernel: tsc: Detected 2794.750 MHz processor
Sep 5 00:09:24.985537 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:09:24.985547 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:09:24.985556 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:09:24.985571 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:09:24.985580 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:09:24.985586 kernel: Using GB pages for direct mapping
Sep 5 00:09:24.985593 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:09:24.985600 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:09:24.985607 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985614 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985621 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985631 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:09:24.985651 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985660 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985671 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985680 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:09:24.985689 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:09:24.985696 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:09:24.985708 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:09:24.985718 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:09:24.985725 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:09:24.985732 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:09:24.985739 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:09:24.985751 kernel: No NUMA configuration found
Sep 5 00:09:24.985761 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:09:24.985770 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 5 00:09:24.985784 kernel: Zone ranges:
Sep 5 00:09:24.985792 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:09:24.985802 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:09:24.985811 kernel: Normal empty
Sep 5 00:09:24.985818 kernel: Movable zone start for each node
Sep 5 00:09:24.985825 kernel: Early memory node ranges
Sep 5 00:09:24.985832 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:09:24.985839 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:09:24.985846 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:09:24.985857 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:09:24.985871 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:09:24.985880 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:09:24.985890 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:09:24.985899 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:09:24.985909 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:09:24.985918 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:09:24.985925 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:09:24.985932 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:09:24.985943 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:09:24.985950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:09:24.985957 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:09:24.985965 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:09:24.985975 kernel: TSC deadline timer available
Sep 5 00:09:24.985986 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 5 00:09:24.985995 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:09:24.986005 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:09:24.986017 kernel: kvm-guest: setup PV sched yield
Sep 5 00:09:24.986030 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:09:24.986039 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:09:24.986048 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:09:24.986058 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:09:24.986068 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 5 00:09:24.986075 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 5 00:09:24.986083 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:09:24.986090 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:09:24.986097 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:09:24.986108 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:09:24.986116 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:09:24.986123 kernel: random: crng init done
Sep 5 00:09:24.986133 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:09:24.986143 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:09:24.986153 kernel: Fallback order for Node 0: 0
Sep 5 00:09:24.986163 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 5 00:09:24.986172 kernel: Policy zone: DMA32
Sep 5 00:09:24.986184 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:09:24.986192 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Sep 5 00:09:24.986199 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:09:24.986206 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 5 00:09:24.986213 kernel: ftrace: allocated 149 pages with 4 groups
Sep 5 00:09:24.986221 kernel: Dynamic Preempt: voluntary
Sep 5 00:09:24.986228 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:09:24.986237 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:09:24.986247 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:09:24.986261 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:09:24.986272 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:09:24.986281 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:09:24.986289 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:09:24.986300 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:09:24.986307 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:09:24.986315 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:09:24.986322 kernel: Console: colour VGA+ 80x25
Sep 5 00:09:24.986329 kernel: printk: console [ttyS0] enabled
Sep 5 00:09:24.986337 kernel: ACPI: Core revision 20230628
Sep 5 00:09:24.986350 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:09:24.986360 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:09:24.986370 kernel: x2apic enabled
Sep 5 00:09:24.986380 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:09:24.986390 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:09:24.986398 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:09:24.986405 kernel: kvm-guest: setup PV IPIs
Sep 5 00:09:24.986483 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:09:24.986495 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 5 00:09:24.986505 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 5 00:09:24.986516 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:09:24.986527 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:09:24.986534 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:09:24.986542 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:09:24.986550 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:09:24.986561 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:09:24.986574 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:09:24.986585 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:09:24.986600 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:09:24.986608 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:09:24.986616 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:09:24.986624 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:09:24.986639 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:09:24.986647 kernel: active return thunk: srso_return_thunk
Sep 5 00:09:24.986661 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:09:24.986670 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:09:24.986680 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:09:24.986690 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:09:24.986700 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:09:24.986711 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:09:24.986719 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:09:24.986727 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:09:24.986734 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:09:24.986745 kernel: landlock: Up and running.
Sep 5 00:09:24.986753 kernel: SELinux: Initializing.
Sep 5 00:09:24.986761 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:09:24.986772 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:09:24.986781 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:09:24.986791 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:09:24.986802 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:09:24.986813 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:09:24.986826 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:09:24.986837 kernel: ... version: 0
Sep 5 00:09:24.986845 kernel: ... bit width: 48
Sep 5 00:09:24.986852 kernel: ... generic registers: 6
Sep 5 00:09:24.986860 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:09:24.986868 kernel: ... max period: 00007fffffffffff
Sep 5 00:09:24.986878 kernel: ... fixed-purpose events: 0
Sep 5 00:09:24.986888 kernel: ... event mask: 000000000000003f
Sep 5 00:09:24.986898 kernel: signal: max sigframe size: 1776
Sep 5 00:09:24.986909 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:09:24.986923 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:09:24.986930 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:09:24.986938 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:09:24.986946 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:09:24.986953 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:09:24.986961 kernel: smpboot: Max logical packages: 1
Sep 5 00:09:24.986968 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 5 00:09:24.986977 kernel: devtmpfs: initialized
Sep 5 00:09:24.986987 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:09:24.986997 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:09:24.987010 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:09:24.987021 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:09:24.987031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:09:24.987039 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:09:24.987046 kernel: audit: type=2000 audit(1757030964.980:1): state=initialized audit_enabled=0 res=1
Sep 5 00:09:24.987054 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:09:24.987062 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:09:24.987069 kernel: cpuidle: using governor menu
Sep 5 00:09:24.987080 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:09:24.987087 kernel: dca service started, version 1.12.1
Sep 5 00:09:24.987098 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 5 00:09:24.987108 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:09:24.987119 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:09:24.987129 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
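The calibration entry above ("5589.50 BogoMIPS (lpj=2794750)") and the SMP summary ("Total of 4 processors activated (22358.00 BogoMIPS)") are consistent with each other: BogoMIPS is derived from loops_per_jiffy as lpj / (500000 / HZ). A quick cross-check, assuming CONFIG_HZ=1000 (typical for a config like this, but not printed in the log):

```python
# Cross-check the BogoMIPS figures above. HZ=1000 is an assumption here.
lpj = 2794750                      # from "Calibrating delay loop ... (lpj=2794750)"
HZ = 1000                          # assumed timer frequency
bogomips = lpj / (500_000 / HZ)
print(bogomips)      # 5589.5  -> "5589.50 BogoMIPS"
print(4 * bogomips)  # 22358.0 -> "Total of 4 processors activated (22358.00 BogoMIPS)"
```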
Sep 5 00:09:24.987139 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:09:24.987148 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:09:24.987155 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:09:24.987166 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:09:24.987174 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:09:24.987181 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:09:24.987189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:09:24.987197 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:09:24.987208 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 5 00:09:24.987218 kernel: ACPI: Interpreter enabled
Sep 5 00:09:24.987228 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:09:24.987239 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:09:24.987252 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:09:24.987259 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:09:24.987267 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:09:24.987275 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:09:24.987559 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:09:24.987728 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:09:24.987860 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:09:24.987870 kernel: PCI host bridge to bus 0000:00
Sep 5 00:09:24.988033 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:09:24.988153 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:09:24.988281 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:09:24.988438 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:09:24.988564 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:09:24.988690 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:09:24.988813 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:09:24.988976 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 5 00:09:24.989154 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 5 00:09:24.989379 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:09:24.989572 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:09:24.989711 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:09:24.989837 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:09:24.990002 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:09:24.990156 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 5 00:09:24.990301 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:09:24.990470 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:09:24.990681 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 5 00:09:24.990888 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 5 00:09:24.991054 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:09:24.991226 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:09:24.991409 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 5 00:09:24.991584 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 5 00:09:24.991741 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:09:24.991881 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:09:24.992034 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:09:24.992201 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 5 00:09:24.992465 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:09:24.992665 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 5 00:09:24.992812 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 5 00:09:24.992959 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:09:24.993117 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 5 00:09:24.993245 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 5 00:09:24.993261 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:09:24.993269 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:09:24.993277 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:09:24.993308 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:09:24.993330 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:09:24.993342 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:09:24.993350 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:09:24.993357 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:09:24.993365 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:09:24.993378 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:09:24.993391 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:09:24.993399 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:09:24.993409 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:09:24.993436 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:09:24.993447 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:09:24.993455 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:09:24.993463 kernel: iommu: Default domain type: Translated
Sep 5 00:09:24.993471 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:09:24.993483 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:09:24.993491 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:09:24.993499 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:09:24.993507 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:09:24.993654 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:09:24.993781 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:09:24.993907 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:09:24.993917 kernel: vgaarb: loaded
Sep 5 00:09:24.993925 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:09:24.993937 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:09:24.993945 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:09:24.993953 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:09:24.993961 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:09:24.993968 kernel: pnp: PnP ACPI init
Sep 5 00:09:24.994128 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:09:24.994140 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:09:24.994148 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:09:24.994162 kernel: NET: Registered PF_INET protocol family
Sep 5 00:09:24.994173 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:09:24.994184 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:09:24.994194 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:09:24.994205 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:09:24.994215 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:09:24.994226 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:09:24.994236 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:09:24.994245 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:09:24.994256 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:09:24.994263 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:09:24.994405 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:09:24.994625 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:09:24.994751 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:09:24.994864 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:09:24.994976 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:09:24.995091 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:09:24.995112 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:09:24.995122 kernel: Initialise system trusted keyrings
Sep 5 00:09:24.995133 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:09:24.995143 kernel: Key type asymmetric registered
Sep 5 00:09:24.995153 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:09:24.995160 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 5 00:09:24.995168 kernel: io scheduler mq-deadline registered
Sep 5 00:09:24.995176 kernel: io scheduler kyber registered
Sep 5 00:09:24.995183 kernel: io scheduler bfq registered
Sep 5 00:09:24.995195 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:09:24.995203 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:09:24.995212 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:09:24.995220 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:09:24.995227 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:09:24.995235 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:09:24.995243 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:09:24.995250 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:09:24.995258 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:09:24.995437 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:09:24.995451 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:09:24.995587 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:09:24.995734 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:09:24 UTC (1757030964)
Sep 5 00:09:24.995856 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:09:24.995867 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:09:24.995874 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:09:24.995882 kernel: Segment Routing with IPv6
Sep 5 00:09:24.995898 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:09:24.995908 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:09:24.995919 kernel: Key type dns_resolver registered
Sep 5 00:09:24.995928 kernel: IPI shorthand broadcast: enabled
Sep 5 00:09:24.995939 kernel: sched_clock: Marking stable (857004143, 121536676)->(1000281217, -21740398)
Sep 5 00:09:24.995948 kernel: registered taskstats version 1
Sep 5 00:09:24.995956 kernel: Loading compiled-in X.509 certificates
Sep 5 00:09:24.995964 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c'
Sep 5 00:09:24.995972 kernel: Key type .fscrypt registered
Sep 5 00:09:24.995983 kernel: Key type fscrypt-provisioning registered
Sep 5 00:09:24.995991 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:09:24.996002 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:09:24.996011 kernel: ima: No architecture policies found
Sep 5 00:09:24.996021 kernel: clk: Disabling unused clocks
Sep 5 00:09:24.996031 kernel: Freeing unused kernel image (initmem) memory: 42872K
Sep 5 00:09:24.996041 kernel: Write protecting the kernel read-only data: 36864k
Sep 5 00:09:24.996052 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 5 00:09:24.996059 kernel: Run /init as init process
Sep 5 00:09:24.996070 kernel: with arguments:
Sep 5 00:09:24.996078 kernel: /init
Sep 5 00:09:24.996086 kernel: with environment:
Sep 5 00:09:24.996093 kernel: HOME=/
Sep 5 00:09:24.996101 kernel: TERM=linux
Sep 5 00:09:24.996108 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:09:24.996118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:09:24.996128 systemd[1]: Detected virtualization kvm.
Sep 5 00:09:24.996139 systemd[1]: Detected architecture x86-64.
Sep 5 00:09:24.996148 systemd[1]: Running in initrd.
Sep 5 00:09:24.996155 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:09:24.996163 systemd[1]: Hostname set to <localhost>.
Sep 5 00:09:24.996172 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:09:24.996180 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:09:24.996188 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:09:24.996196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:09:24.996209 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:09:24.996237 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:09:24.996251 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:09:24.996263 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
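Several of the allocation entries above (the dentry, inode, futex, and TCP hash tables) are annotated with "(order: N, M bytes)": order N means 2^N physically contiguous pages, so with the 4 KiB base page size on x86-64 the byte figures can be checked directly. A small illustrative sketch (the helper name order_bytes is made up here):

```python
# Check the "(order: N, M bytes)" annotations: order N = 2**N contiguous pages.
PAGE_SIZE = 4096  # x86-64 base page size

def order_bytes(order: int) -> int:
    return (1 << order) * PAGE_SIZE

print(order_bytes(6))           # 262144  -> TCP established hash table (order: 6)
print(order_bytes(8))           # 1048576 -> TCP bind hash table (order: 8)
print(order_bytes(6) // 32768)  # 8 bytes per slot for the 32768 established entries
```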
Sep 5 00:09:24.996274 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:09:24.996286 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:09:24.996296 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:09:24.996305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:09:24.996313 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:09:24.996324 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:09:24.996335 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:09:24.996346 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:09:24.996358 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:09:24.996371 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:09:24.996379 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:09:24.996388 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:09:24.996396 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:09:24.996407 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:09:24.996455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:09:24.996468 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:09:24.996479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:09:24.996492 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:09:24.996500 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:09:24.996508 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:09:24.996517 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:09:24.996525 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:09:24.996534 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:09:24.996545 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:09:24.996556 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:09:24.996568 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:09:24.996606 systemd-journald[192]: Collecting audit messages is disabled.
Sep 5 00:09:24.996630 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:09:24.996653 systemd-journald[192]: Journal started
Sep 5 00:09:24.996680 systemd-journald[192]: Runtime Journal (/run/log/journal/4393993edc344840b25ea2453363bd9a) is 6.0M, max 48.4M, 42.3M free.
Sep 5 00:09:24.991110 systemd-modules-load[193]: Inserted module 'overlay'
Sep 5 00:09:25.025208 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:09:25.025233 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:09:25.025247 kernel: Bridge firewalling registered
Sep 5 00:09:25.018692 systemd-modules-load[193]: Inserted module 'br_netfilter'
Sep 5 00:09:25.024263 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:09:25.026137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:09:25.048665 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:09:25.049982 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:09:25.051079 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:09:25.064130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:09:25.072656 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:09:25.074407 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:09:25.076837 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:09:25.079731 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:09:25.086612 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:09:25.100691 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:09:25.103333 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:09:25.114107 dracut-cmdline[227]: dracut-dracut-053
Sep 5 00:09:25.118272 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:09:25.137455 systemd-resolved[229]: Positive Trust Anchors:
Sep 5 00:09:25.137473 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:09:25.137508 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:09:25.140161 systemd-resolved[229]: Defaulting to hostname 'linux'.
Sep 5 00:09:25.141407 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:09:25.148233 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:09:25.209458 kernel: SCSI subsystem initialized
Sep 5 00:09:25.241445 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:09:25.252448 kernel: iscsi: registered transport (tcp)
Sep 5 00:09:25.273844 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:09:25.273889 kernel: QLogic iSCSI HBA Driver
Sep 5 00:09:25.325667 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:09:25.340700 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:09:25.368328 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 00:09:25.368404 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:09:25.368438 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:09:25.410451 kernel: raid6: avx2x4 gen() 30147 MB/s
Sep 5 00:09:25.427442 kernel: raid6: avx2x2 gen() 30672 MB/s
Sep 5 00:09:25.444619 kernel: raid6: avx2x1 gen() 25674 MB/s
Sep 5 00:09:25.444663 kernel: raid6: using algorithm avx2x2 gen() 30672 MB/s
Sep 5 00:09:25.462647 kernel: raid6: .... xor() 19680 MB/s, rmw enabled
Sep 5 00:09:25.462741 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:09:25.483459 kernel: xor: automatically using best checksumming function avx
Sep 5 00:09:25.642467 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:09:25.656962 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:09:25.667561 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:09:25.681365 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Sep 5 00:09:25.686403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:09:25.698630 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:09:25.713822 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Sep 5 00:09:25.748353 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:09:25.761612 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:09:25.834654 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:09:25.844643 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:09:25.860186 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:09:25.863094 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:09:25.866477 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:09:25.868927 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:09:25.878626 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:09:25.888443 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:09:25.888713 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:09:25.892624 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:09:25.892195 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:09:25.900820 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:09:25.900856 kernel: GPT:9289727 != 19775487
Sep 5 00:09:25.900868 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:09:25.900878 kernel: GPT:9289727 != 19775487
Sep 5 00:09:25.901918 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:09:25.901939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:09:25.906572 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:09:25.906731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:09:25.910481 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:09:25.910970 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:09:25.911133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:09:25.911380 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:09:25.921461 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 5 00:09:25.921502 kernel: libata version 3.00 loaded.
Sep 5 00:09:25.921525 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:09:25.927707 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:09:25.933436 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:09:25.935469 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:09:25.943447 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 5 00:09:25.943680 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:09:25.945221 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Sep 5 00:09:25.950482 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (462)
Sep 5 00:09:25.961127 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:09:25.965461 kernel: scsi host0: ahci
Sep 5 00:09:25.966444 kernel: scsi host1: ahci
Sep 5 00:09:25.970347 kernel: scsi host2: ahci
Sep 5 00:09:25.973172 kernel: scsi host3: ahci
Sep 5 00:09:25.973549 kernel: scsi host4: ahci
Sep 5 00:09:25.974471 kernel: scsi host5: ahci
Sep 5 00:09:25.974659 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 5 00:09:25.974679 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 5 00:09:25.974690 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 5 00:09:25.974701 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 5 00:09:25.974711 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 5 00:09:25.974721 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 5 00:09:25.975706 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:09:26.011794 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:09:26.014447 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:09:26.015195 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:09:26.024198 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:09:26.045579 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:09:26.048130 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:09:26.056361 disk-uuid[554]: Primary Header is updated.
Sep 5 00:09:26.056361 disk-uuid[554]: Secondary Entries is updated.
Sep 5 00:09:26.056361 disk-uuid[554]: Secondary Header is updated.
Sep 5 00:09:26.060216 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:09:26.064463 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:09:26.070882 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:09:26.318902 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:09:26.318983 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:09:26.318997 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:09:26.320462 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:09:26.322003 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:09:26.322018 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:09:26.322029 kernel: ata3.00: applying bridge limits
Sep 5 00:09:26.323451 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:09:26.323467 kernel: ata3.00: configured for UDMA/100
Sep 5 00:09:26.324448 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:09:26.370450 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:09:26.370715 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:09:26.388467 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:09:27.107487 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:09:27.107798 disk-uuid[556]: The operation has completed successfully.
Sep 5 00:09:27.139483 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:09:27.139627 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:09:27.161759 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:09:27.165339 sh[592]: Success
Sep 5 00:09:27.179449 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 5 00:09:27.215288 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:09:27.224054 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:09:27.227533 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:09:27.243242 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d
Sep 5 00:09:27.243315 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:09:27.243331 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:09:27.244214 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:09:27.244931 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:09:27.250055 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:09:27.252469 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:09:27.276648 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:09:27.279332 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:09:27.289525 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:09:27.289556 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:09:27.289568 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:09:27.292449 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:09:27.302842 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:09:27.304844 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:09:27.314643 systemd[1]: Finished ignition-setup.service - Ignition (setup).
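Every entry carries a microsecond timestamp, so phase durations can be read straight off the log: from the first kernel message (00:09:24.985253) to disk-uuid's "The operation has completed successfully." above (00:09:27.107798) is about 2.1 seconds. A minimal sketch of computing such deltas (timestamps copied from the entries above; the prefix carries no year, so this only holds for same-day spans):

```python
# Elapsed time between two journal entries, parsed from their timestamp prefixes.
from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"
boot_start = datetime.strptime("Sep 5 00:09:24.985253", FMT)  # first kernel line
disk_uuid  = datetime.strptime("Sep 5 00:09:27.107798", FMT)  # disk-uuid completion

print((disk_uuid - boot_start).total_seconds())  # ~2.12 s
```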
Sep 5 00:09:27.322661 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:09:27.458360 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:09:27.616207 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:09:27.639696 systemd-networkd[773]: lo: Link UP
Sep 5 00:09:27.639706 systemd-networkd[773]: lo: Gained carrier
Sep 5 00:09:27.641340 systemd-networkd[773]: Enumeration completed
Sep 5 00:09:27.641606 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:09:27.641785 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:09:27.641790 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:09:27.642691 systemd-networkd[773]: eth0: Link UP
Sep 5 00:09:27.642694 systemd-networkd[773]: eth0: Gained carrier
Sep 5 00:09:27.649484 ignition[686]: Ignition 2.19.0
Sep 5 00:09:27.642701 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:09:27.649494 ignition[686]: Stage: fetch-offline
Sep 5 00:09:27.643920 systemd[1]: Reached target network.target - Network.
Sep 5 00:09:27.649559 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:09:27.649583 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:09:27.649714 ignition[686]: parsed url from cmdline: ""
Sep 5 00:09:27.657477 systemd-networkd[773]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:09:27.649718 ignition[686]: no config URL provided
Sep 5 00:09:27.649724 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:09:27.649733 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:09:27.649767 ignition[686]: op(1): [started] loading QEMU firmware config module
Sep 5 00:09:27.649772 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:09:27.660316 ignition[686]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:09:27.699885 ignition[686]: parsing config with SHA512: 50a6b87fe6c47d10ea942d11d651417d795fa0de2e919c54fea41be677b8fdc627426b5f46e480fab4f95764d27c02149ea6b416c427a74e9142b1531434cf16
Sep 5 00:09:27.703366 unknown[686]: fetched base config from "system"
Sep 5 00:09:27.703381 unknown[686]: fetched user config from "qemu"
Sep 5 00:09:27.703755 ignition[686]: fetch-offline: fetch-offline passed
Sep 5 00:09:27.703824 ignition[686]: Ignition finished successfully
Sep 5 00:09:27.706688 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:09:27.708699 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:09:27.719595 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:09:27.750441 ignition[784]: Ignition 2.19.0
Sep 5 00:09:27.750452 ignition[784]: Stage: kargs
Sep 5 00:09:27.750648 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:09:27.750662 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:09:27.751527 ignition[784]: kargs: kargs passed
Sep 5 00:09:27.751584 ignition[784]: Ignition finished successfully
Sep 5 00:09:27.755623 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 00:09:27.766591 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:09:27.785290 ignition[792]: Ignition 2.19.0
Sep 5 00:09:27.785303 ignition[792]: Stage: disks
Sep 5 00:09:27.785507 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:09:27.785523 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:09:27.786339 ignition[792]: disks: disks passed
Sep 5 00:09:27.788757 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:09:27.786389 ignition[792]: Ignition finished successfully
Sep 5 00:09:27.790617 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:09:27.792873 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:09:27.794820 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:09:27.796881 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:09:27.799058 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:09:27.811582 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:09:27.827482 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:09:27.835107 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:09:27.840632 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:09:27.934462 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none.
Sep 5 00:09:27.935115 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:09:27.936055 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:09:27.950600 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:09:27.953366 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:09:27.954223 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:09:27.954278 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:09:27.954309 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:09:27.963913 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:09:27.967306 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:09:27.969316 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810)
Sep 5 00:09:27.971566 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:09:27.971592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:09:27.971603 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:09:27.974437 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:09:27.975786 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:09:28.009490 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:09:28.014988 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:09:28.020163 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:09:28.024274 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:09:28.121954 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:09:28.134542 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:09:28.136255 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:09:28.144479 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:09:28.164808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:09:28.173044 ignition[924]: INFO : Ignition 2.19.0
Sep 5 00:09:28.173044 ignition[924]: INFO : Stage: mount
Sep 5 00:09:28.174695 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:09:28.174695 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:09:28.174695 ignition[924]: INFO : mount: mount passed
Sep 5 00:09:28.174695 ignition[924]: INFO : Ignition finished successfully
Sep 5 00:09:28.179774 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:09:28.191577 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:09:28.242539 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:09:28.255591 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:09:28.263454 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936)
Sep 5 00:09:28.265687 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:09:28.265719 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:09:28.265730 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:09:28.268447 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:09:28.270112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:09:28.296328 ignition[953]: INFO : Ignition 2.19.0 Sep 5 00:09:28.296328 ignition[953]: INFO : Stage: files Sep 5 00:09:28.298169 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:09:28.298169 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:09:28.298169 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:09:28.301514 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:09:28.301514 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:09:28.301514 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:09:28.301514 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:09:28.306964 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:09:28.306964 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:09:28.306964 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 5 00:09:28.301603 unknown[953]: wrote ssh authorized keys file for user: core Sep 5 00:09:28.348537 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:09:28.547769 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 5 00:09:28.547769 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:09:28.552050 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:09:28.552050 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:09:28.555482 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:09:28.555482 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:09:28.558901 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:09:28.560603 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:09:28.562563 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:09:28.564731 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:09:28.566659 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:09:28.568390 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:28.570881 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:28.574101 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:28.576145 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 5 00:09:28.850620 systemd-networkd[773]: eth0: Gained IPv6LL Sep 5 00:09:29.345254 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 00:09:29.905593 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 5 00:09:29.905593 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 00:09:29.909281 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:09:29.928234 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:09:29.932890 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:09:29.934430 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:09:29.934430 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:09:29.934430 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:09:29.934430 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:09:29.934430 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:09:29.934430 ignition[953]: INFO : files: files passed Sep 5 00:09:29.934430 ignition[953]: INFO : Ignition finished successfully Sep 5 00:09:29.935802 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:09:29.943612 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:09:29.946148 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:09:29.948068 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 5 00:09:29.948179 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:09:29.955751 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:09:29.958269 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:09:29.958269 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:09:29.961459 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:09:29.964452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:09:29.965881 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:09:29.975569 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:09:30.000752 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:09:30.000898 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:09:30.003106 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:09:30.005099 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:09:30.007070 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:09:30.007821 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:09:30.025345 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:09:30.034587 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:09:30.043342 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:09:30.044598 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:09:30.046758 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:09:30.048727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:09:30.048838 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:09:30.050942 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:09:30.052610 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:09:30.054581 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:09:30.056581 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:09:30.058535 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:09:30.060636 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:09:30.062752 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:09:30.065010 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:09:30.066932 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:09:30.069059 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:09:30.070788 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:09:30.070903 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:09:30.073029 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:09:30.074606 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 5 00:09:30.076632 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:09:30.076730 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:09:30.078800 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:09:30.078907 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:09:30.081190 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:09:30.081299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:09:30.083250 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:09:30.084938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:09:30.089475 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:09:30.091652 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:09:30.093369 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:09:30.095317 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:09:30.095413 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:09:30.097689 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:09:30.097782 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:09:30.099483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:09:30.099606 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:09:30.101521 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:09:30.101629 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:09:30.119579 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:09:30.121412 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:09:30.121565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:09:30.125379 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:09:30.125669 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:09:30.125839 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:09:30.127890 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:09:30.128049 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:09:30.135208 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:09:30.135331 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:09:30.139049 ignition[1008]: INFO : Ignition 2.19.0 Sep 5 00:09:30.139049 ignition[1008]: INFO : Stage: umount Sep 5 00:09:30.139049 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:09:30.139049 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:09:30.139049 ignition[1008]: INFO : umount: umount passed Sep 5 00:09:30.139049 ignition[1008]: INFO : Ignition finished successfully Sep 5 00:09:30.138808 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:09:30.138917 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:09:30.140152 systemd[1]: Stopped target network.target - Network. Sep 5 00:09:30.142821 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:09:30.142888 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Sep 5 00:09:30.143489 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:09:30.143546 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:09:30.143885 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:09:30.143930 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:09:30.150485 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:09:30.150549 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:09:30.152893 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:09:30.155137 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:09:30.159474 systemd-networkd[773]: eth0: DHCPv6 lease lost Sep 5 00:09:30.162858 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:09:30.163009 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:09:30.163711 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:09:30.163832 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:09:30.166800 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:09:30.166868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:09:30.172623 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:09:30.173022 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:09:30.173080 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:09:30.173384 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:09:30.173452 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:09:30.173705 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:09:30.173751 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:09:30.174027 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:09:30.174070 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:09:30.181255 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:09:30.191842 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:09:30.191964 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:09:30.199155 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:09:30.199348 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:09:30.201680 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:09:30.201731 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:09:30.203054 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:09:30.203098 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:09:30.205066 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:09:30.205120 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:09:30.208738 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:09:30.208790 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:09:30.211666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 5 00:09:30.211718 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:09:30.219549 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:09:30.219973 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 00:09:30.220030 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:09:30.222191 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:09:30.222242 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:09:30.227197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:09:30.227315 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:09:30.246244 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:09:30.423910 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:09:30.424036 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:09:30.425000 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:09:30.428540 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:09:30.429526 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:09:30.447565 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:09:30.454614 systemd[1]: Switching root. Sep 5 00:09:30.489279 systemd-journald[192]: Journal stopped Sep 5 00:09:31.809191 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 5 00:09:31.809301 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:09:31.809327 kernel: SELinux: policy capability open_perms=1 Sep 5 00:09:31.809344 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:09:31.809356 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:09:31.809368 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:09:31.809387 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:09:31.809409 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:09:31.810381 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:09:31.810413 kernel: audit: type=1403 audit(1757030971.048:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:09:31.810453 systemd[1]: Successfully loaded SELinux policy in 39.502ms. Sep 5 00:09:31.810492 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.907ms. Sep 5 00:09:31.810505 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:09:31.810518 systemd[1]: Detected virtualization kvm. Sep 5 00:09:31.810531 systemd[1]: Detected architecture x86-64. Sep 5 00:09:31.810542 systemd[1]: Detected first boot. Sep 5 00:09:31.810555 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:09:31.810567 zram_generator::config[1054]: No configuration found. Sep 5 00:09:31.810580 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:09:31.810595 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:09:31.810607 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
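This is the initrd-to-real-root handoff: the initrd journald (PID 192) stops, PID 1 switches root, the SELinux policy loads in about 39.5 ms, and a fresh journald instance takes over further down. After boot, the same kernel messages can be fished back out of the journal; a sketch, assuming the python-systemd bindings are available:

    # Pull this boot's kernel SELinux messages back out of the journal,
    # assuming the python-systemd bindings are installed.
    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()                           # restrict to the current boot
    reader.add_match(SYSLOG_IDENTIFIER="kernel")

    for entry in reader:
        msg = entry["MESSAGE"]
        if "SELinux" in msg:
            print(msg)                           # e.g. "SELinux: policy capability ..."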
Sep 5 00:09:31.810619 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:09:31.810632 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:09:31.810646 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:09:31.810657 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:09:31.810669 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:09:31.810681 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:09:31.810697 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:09:31.810709 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:09:31.810721 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:09:31.810734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:09:31.810746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:09:31.810758 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:09:31.810770 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:09:31.810783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:09:31.810795 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:09:31.810809 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:09:31.810821 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:09:31.810833 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:09:31.810845 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:09:31.810857 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:09:31.810869 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:09:31.810887 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:09:31.810901 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:09:31.810915 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:09:31.810927 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:09:31.810939 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:09:31.810951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:09:31.810963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:09:31.810975 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:09:31.810987 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:09:31.810999 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:09:31.811010 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:09:31.811025 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:09:31.811037 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 5 00:09:31.811049 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:31.811061 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:09:31.811073 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:09:31.811085 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:09:31.811097 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:09:31.811112 systemd[1]: Reached target machines.target - Containers. Sep 5 00:09:31.811130 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:09:31.811146 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:31.811166 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:09:31.811188 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:09:31.811205 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:31.811220 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:09:31.811236 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:31.811251 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:09:31.811266 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:09:31.811285 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:09:31.811301 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:09:31.811316 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:09:31.811331 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:09:31.811344 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:09:31.811356 kernel: fuse: init (API version 7.39) Sep 5 00:09:31.811367 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:09:31.811380 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:09:31.811391 kernel: loop: module loaded Sep 5 00:09:31.811406 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:09:31.811442 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:09:31.811456 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:09:31.811476 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:09:31.811488 systemd[1]: Stopped verity-setup.service. Sep 5 00:09:31.811500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:31.811513 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:09:31.811548 systemd-journald[1121]: Collecting audit messages is disabled. Sep 5 00:09:31.811576 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:09:31.811589 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 5 00:09:31.811601 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:09:31.811613 kernel: ACPI: bus type drm_connector registered Sep 5 00:09:31.811627 systemd-journald[1121]: Journal started Sep 5 00:09:31.811650 systemd-journald[1121]: Runtime Journal (/run/log/journal/4393993edc344840b25ea2453363bd9a) is 6.0M, max 48.4M, 42.3M free. Sep 5 00:09:31.568291 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:09:31.586334 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:09:31.586829 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:09:31.815625 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:09:31.816859 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:09:31.818885 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:09:31.820285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:09:31.821919 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:09:31.823484 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:09:31.823678 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:09:31.825184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:31.825374 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:31.826921 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:09:31.827113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:09:31.828493 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:31.828675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:31.830375 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:09:31.830621 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:09:31.832031 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:31.832229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:31.833710 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:09:31.835179 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:09:31.836771 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:09:31.855292 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:09:31.870536 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:09:31.873003 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:09:31.874190 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:09:31.874218 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:09:31.876339 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:09:31.879773 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:09:31.885604 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 00:09:31.886956 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 5 00:09:31.890243 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:09:31.898576 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:09:31.900051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:09:31.901601 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:09:31.903017 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:09:31.906180 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:09:31.910099 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:09:31.917831 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:09:31.921286 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:09:31.924844 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:09:31.926620 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:09:31.941244 systemd-journald[1121]: Time spent on flushing to /var/log/journal/4393993edc344840b25ea2453363bd9a is 13.670ms for 953 entries. Sep 5 00:09:31.941244 systemd-journald[1121]: System Journal (/var/log/journal/4393993edc344840b25ea2453363bd9a) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:09:31.978284 systemd-journald[1121]: Received client request to flush runtime journal. Sep 5 00:09:31.978326 kernel: loop0: detected capacity change from 0 to 142488 Sep 5 00:09:31.953451 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:09:31.956264 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:09:31.967666 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:09:31.970349 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:09:31.983755 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:09:31.986516 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:09:31.994065 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:09:31.997447 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:09:32.009474 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 5 00:09:32.012372 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:09:32.013124 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:09:32.014722 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:09:32.021566 kernel: loop1: detected capacity change from 0 to 140768 Sep 5 00:09:32.026942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:09:32.064691 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Sep 5 00:09:32.064717 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. 
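The flush report above is easy to sanity-check: 13.670 ms for 953 entries is roughly 14 µs per entry copied from the runtime journal in /run to the persistent one under /var/log/journal:

    # Numbers taken from the systemd-journald flush report above.
    flush_ms = 13.670
    entries = 953

    per_entry_us = flush_ms * 1000 / entries
    print(f"{per_entry_us:.1f} us per entry")    # ~14.3 us per entry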
Sep 5 00:09:32.065475 kernel: loop2: detected capacity change from 0 to 229808 Sep 5 00:09:32.074203 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:09:32.102485 kernel: loop3: detected capacity change from 0 to 142488 Sep 5 00:09:32.117471 kernel: loop4: detected capacity change from 0 to 140768 Sep 5 00:09:32.129475 kernel: loop5: detected capacity change from 0 to 229808 Sep 5 00:09:32.135041 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:09:32.136972 (sd-merge)[1192]: Merged extensions into '/usr'. Sep 5 00:09:32.208354 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:09:32.208591 systemd[1]: Reloading... Sep 5 00:09:32.287470 zram_generator::config[1221]: No configuration found. Sep 5 00:09:32.390766 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:09:32.555497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:09:32.609106 systemd[1]: Reloading finished in 399 ms. Sep 5 00:09:32.639512 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:09:32.641246 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:09:32.656747 systemd[1]: Starting ensure-sysext.service... Sep 5 00:09:32.660686 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:09:32.666646 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:09:32.666664 systemd[1]: Reloading... Sep 5 00:09:32.707550 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:09:32.707941 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:09:32.708993 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:09:32.709330 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 5 00:09:32.709411 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Sep 5 00:09:32.714688 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:09:32.714776 systemd-tmpfiles[1256]: Skipping /boot Sep 5 00:09:32.733793 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:09:32.733884 systemd-tmpfiles[1256]: Skipping /boot Sep 5 00:09:32.744032 zram_generator::config[1288]: No configuration found. Sep 5 00:09:32.904350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:09:32.955291 systemd[1]: Reloading finished in 288 ms. Sep 5 00:09:32.975840 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:09:32.989966 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:09:32.999381 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
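The (sd-merge) lines are systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr, which is what the "Merged extensions into '/usr'" message and the subsequent reload reflect; the kubernetes.raw symlink written by Ignition earlier is one of the candidates. A sketch of inspecting this after boot, assuming the standard sysext search paths and CLI:

    # List the extension images systemd-sysext would consider, then show the
    # current merge status. Paths are the standard sysext search locations.
    import subprocess
    from pathlib import Path

    for d in (Path("/etc/extensions"), Path("/var/lib/extensions")):
        if d.is_dir():
            for image in sorted(d.iterdir()):
                print("candidate extension:", image)

    # "systemd-sysext status" prints what is currently merged onto /usr and /opt.
    subprocess.run(["systemd-sysext", "status"], check=False)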
Sep 5 00:09:33.002109 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:09:33.004800 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:09:33.011321 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:09:33.018533 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:09:33.021147 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:09:33.027534 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:33.027980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:33.031730 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:33.034800 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:33.039963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:09:33.041138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:33.049618 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:09:33.050742 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:33.053976 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:33.054185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:33.055859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:33.056054 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:33.057965 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:33.058142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:33.065412 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:33.066719 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:33.075750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:33.078689 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:09:33.080893 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Sep 5 00:09:33.082649 augenrules[1350]: No rules Sep 5 00:09:33.082822 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:09:33.084033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:33.084147 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:33.085623 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:09:33.088354 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:09:33.090603 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Sep 5 00:09:33.092491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:33.092680 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:33.105143 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:09:33.107756 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:09:33.112914 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:09:33.113208 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:09:33.118009 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:09:33.118557 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:09:33.132022 systemd[1]: Finished ensure-sysext.service. Sep 5 00:09:33.136959 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:09:33.146843 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:33.147017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:09:33.156293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:09:33.158964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:09:33.160174 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:09:33.162968 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:09:33.164894 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:09:33.167529 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:09:33.172766 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 00:09:33.174228 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:09:33.174260 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:09:33.174830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:09:33.175027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:09:33.179723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:09:33.193485 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:09:33.193950 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:09:33.202582 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:09:33.219622 systemd-resolved[1325]: Positive Trust Anchors: Sep 5 00:09:33.219649 systemd-resolved[1325]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:09:33.219690 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:09:33.243966 systemd-resolved[1325]: Defaulting to hostname 'linux'. Sep 5 00:09:33.247294 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:09:33.249732 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:09:33.249769 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:09:33.269501 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1384) Sep 5 00:09:33.281313 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:09:33.301734 systemd[1]: Reached target time-set.target - System Time Set. Sep 5 00:09:33.311179 systemd-networkd[1389]: lo: Link UP Sep 5 00:09:33.311209 systemd-networkd[1389]: lo: Gained carrier Sep 5 00:09:33.314189 systemd-networkd[1389]: Enumeration completed Sep 5 00:09:33.314335 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:09:33.315652 systemd[1]: Reached target network.target - Network. Sep 5 00:09:33.320185 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:09:33.320196 systemd-networkd[1389]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:09:33.323457 systemd-networkd[1389]: eth0: Link UP Sep 5 00:09:33.323481 systemd-networkd[1389]: eth0: Gained carrier Sep 5 00:09:33.323497 systemd-networkd[1389]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:09:33.365643 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:09:33.372452 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 5 00:09:33.376563 systemd-networkd[1389]: eth0: DHCPv4 address 10.0.0.52/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:09:33.378455 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:09:33.378791 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:09:33.378824 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 5 00:09:33.379033 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:09:33.381624 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Sep 5 00:09:34.652573 systemd-resolved[1325]: Clock change detected. Flushing caches. Sep 5 00:09:34.653137 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:09:34.653194 systemd-timesyncd[1391]: Initial clock synchronization to Fri 2025-09-05 00:09:34.652508 UTC. Sep 5 00:09:34.659371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
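The "Positive Trust Anchors" entry shows systemd-resolved loading its built-in DNSSEC trust anchor: a DS record for the root zone's key-signing key. (The cache flush just after it follows systemd-timesyncd stepping the clock after contacting 10.0.0.1, since DNSSEC signature validity depends on wall-clock time.) The fields of the logged record decode as follows:

    # The root-zone trust anchor logged by systemd-resolved above.
    ds = ". IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"

    owner, _cls, _type, key_tag, algorithm, digest_type, digest = ds.split()
    print("owner name :", owner)          # "." = the DNS root zone
    print("key tag    :", key_tag)        # 20326, the root KSK (KSK-2017)
    print("algorithm  :", algorithm)      # 8 = RSASHA256
    print("digest type:", digest_type)    # 2 = SHA-256 over the DNSKEY record
    print("digest     :", digest)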
Sep 5 00:09:34.666613 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 5 00:09:34.693463 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 5 00:09:34.713678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:09:34.715550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:09:34.717454 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:09:34.784466 kernel: kvm_amd: TSC scaling supported Sep 5 00:09:34.784530 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:09:34.784544 kernel: kvm_amd: Nested Paging enabled Sep 5 00:09:34.784578 kernel: kvm_amd: LBR virtualization supported Sep 5 00:09:34.784592 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:09:34.785714 kernel: kvm_amd: Virtual GIF supported Sep 5 00:09:34.805457 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:09:34.840895 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:09:34.863676 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:09:34.865271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:09:34.875299 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:09:34.912848 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:09:34.914437 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:09:34.915560 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:09:34.916784 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:09:34.918048 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:09:34.919563 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:09:34.920904 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:09:34.922198 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:09:34.923436 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:09:34.923475 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:09:34.924367 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:09:34.926122 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:09:34.929085 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:09:34.949065 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:09:34.951620 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:09:34.953243 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:09:34.954458 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:09:34.955599 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:09:34.956590 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:09:34.956622 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
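docker.socket and sshd.socket being listened on without their daemons running is systemd socket activation: the manager binds the sockets itself and hands them to the service over the $LISTEN_FDS protocol when traffic arrives. A minimal sketch of the receiving side, using the usual convention that inherited descriptors start at fd 3:

    import os
    import socket

    SD_LISTEN_FDS_START = 3   # first inherited fd, per the sd_listen_fds(3) convention

    def inherited_sockets():
        """Wrap sockets passed by systemd socket activation, if any."""
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []
        n = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(n)]

    for s in inherited_sockets():
        print("activated socket:", s.getsockname())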
Sep 5 00:09:34.957672 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:09:34.959854 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:09:34.964542 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:09:34.967716 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:09:34.970564 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:09:34.971760 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:09:34.972840 jq[1429]: false Sep 5 00:09:34.973709 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:09:34.986623 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:09:34.991362 extend-filesystems[1430]: Found loop3 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found loop4 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found loop5 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found sr0 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda1 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda2 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda3 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found usr Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda4 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda6 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda7 Sep 5 00:09:34.991362 extend-filesystems[1430]: Found vda9 Sep 5 00:09:34.991362 extend-filesystems[1430]: Checking size of /dev/vda9 Sep 5 00:09:34.997206 dbus-daemon[1428]: [system] SELinux support is enabled Sep 5 00:09:34.991452 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:09:34.999583 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:09:35.008834 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:09:35.010108 extend-filesystems[1430]: Resized partition /dev/vda9 Sep 5 00:09:35.010348 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:09:35.010851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:09:35.013498 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:09:35.014906 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:09:35.017862 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:09:35.020262 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:09:35.025143 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:09:35.028929 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:09:35.029830 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:09:35.029145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:09:35.029642 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:09:35.029966 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 5 00:09:35.034560 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:09:35.035821 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:09:35.039126 jq[1449]: true Sep 5 00:09:35.040448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1384) Sep 5 00:09:35.063649 jq[1455]: true Sep 5 00:09:35.066075 update_engine[1447]: I20250905 00:09:35.065943 1447 main.cc:92] Flatcar Update Engine starting Sep 5 00:09:35.069815 update_engine[1447]: I20250905 00:09:35.069732 1447 update_check_scheduler.cc:74] Next update check in 8m8s Sep 5 00:09:35.080849 tar[1453]: linux-amd64/LICENSE Sep 5 00:09:35.081135 tar[1453]: linux-amd64/helm Sep 5 00:09:35.085078 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:09:35.088235 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Sep 5 00:09:35.088267 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:09:35.088864 systemd-logind[1443]: New seat seat0. Sep 5 00:09:35.092383 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:09:35.092519 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:09:35.095195 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:09:35.095232 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:09:35.096891 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:09:35.096912 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:09:35.108697 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:09:35.110017 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:09:35.115793 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:09:35.115793 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:09:35.115793 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:09:35.125831 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Sep 5 00:09:35.117659 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:09:35.117889 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:09:35.146227 sshd_keygen[1451]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:09:35.146666 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:09:35.149733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:09:35.150760 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:09:35.153160 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:09:35.198400 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:09:35.204665 systemd[1]: Starting issuegen.service - Generate /run/issue... 
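The extend-filesystems transcript above records an online grow of the root ext4 filesystem from 553472 to 1864699 4 KiB blocks while / stayed mounted, i.e. from about 2.1 GiB to about 7.1 GiB:

    # Block counts from the EXT4-fs / resize2fs messages above; 4 KiB blocks.
    BLOCK = 4096
    old_blocks, new_blocks = 553472, 1864699

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        print(f"{label}: {blocks * BLOCK / 2**30:.2f} GiB")
    # before: 2.11 GiB
    # after: 7.11 GiB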
Sep 5 00:09:35.215562 systemd[1]: issuegen.service: Deactivated successfully. Sep 5 00:09:35.215782 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:09:35.224702 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:09:35.429307 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:09:35.438733 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:09:35.441599 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:09:35.443285 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:09:35.674635 containerd[1460]: time="2025-09-05T00:09:35.674368168Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:09:35.716106 containerd[1460]: time="2025-09-05T00:09:35.715994006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:35.718952 containerd[1460]: time="2025-09-05T00:09:35.718893272Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:35.718952 containerd[1460]: time="2025-09-05T00:09:35.718935942Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:09:35.718952 containerd[1460]: time="2025-09-05T00:09:35.718954968Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:09:35.719341 containerd[1460]: time="2025-09-05T00:09:35.719299754Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:09:35.719341 containerd[1460]: time="2025-09-05T00:09:35.719328127Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:35.719491 containerd[1460]: time="2025-09-05T00:09:35.719447271Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:35.719491 containerd[1460]: time="2025-09-05T00:09:35.719480463Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:35.719834 containerd[1460]: time="2025-09-05T00:09:35.719796586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:35.719834 containerd[1460]: time="2025-09-05T00:09:35.719817535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:35.719937 containerd[1460]: time="2025-09-05T00:09:35.719836901Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:35.719937 containerd[1460]: time="2025-09-05T00:09:35.719854775Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Sep 5 00:09:35.720058 containerd[1460]: time="2025-09-05T00:09:35.720009936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:35.720474 containerd[1460]: time="2025-09-05T00:09:35.720414595Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:09:35.720653 containerd[1460]: time="2025-09-05T00:09:35.720608127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:09:35.720653 containerd[1460]: time="2025-09-05T00:09:35.720643744Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:09:35.720890 containerd[1460]: time="2025-09-05T00:09:35.720847917Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:09:35.720957 containerd[1460]: time="2025-09-05T00:09:35.720940531Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:09:35.728013 containerd[1460]: time="2025-09-05T00:09:35.727935196Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:09:35.728117 containerd[1460]: time="2025-09-05T00:09:35.728071591Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:09:35.728184 containerd[1460]: time="2025-09-05T00:09:35.728118820Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:09:35.728184 containerd[1460]: time="2025-09-05T00:09:35.728156581Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:09:35.728264 containerd[1460]: time="2025-09-05T00:09:35.728204340Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:09:35.728522 containerd[1460]: time="2025-09-05T00:09:35.728496678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:09:35.728900 containerd[1460]: time="2025-09-05T00:09:35.728875709Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:09:35.729059 containerd[1460]: time="2025-09-05T00:09:35.729036771Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:09:35.729100 containerd[1460]: time="2025-09-05T00:09:35.729069432Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:09:35.729125 containerd[1460]: time="2025-09-05T00:09:35.729105921Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:09:35.729143 containerd[1460]: time="2025-09-05T00:09:35.729126349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729162 containerd[1460]: time="2025-09-05T00:09:35.729142519Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 5 00:09:35.729162 containerd[1460]: time="2025-09-05T00:09:35.729157037Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729209 containerd[1460]: time="2025-09-05T00:09:35.729171133Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729209 containerd[1460]: time="2025-09-05T00:09:35.729185901Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729209 containerd[1460]: time="2025-09-05T00:09:35.729199045Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729271 containerd[1460]: time="2025-09-05T00:09:35.729211779Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729271 containerd[1460]: time="2025-09-05T00:09:35.729224403Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:09:35.729271 containerd[1460]: time="2025-09-05T00:09:35.729252686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729351 containerd[1460]: time="2025-09-05T00:09:35.729280719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729351 containerd[1460]: time="2025-09-05T00:09:35.729309012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729351 containerd[1460]: time="2025-09-05T00:09:35.729324400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729351 containerd[1460]: time="2025-09-05T00:09:35.729336683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729351 containerd[1460]: time="2025-09-05T00:09:35.729349558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729361991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729393410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729469112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729510650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729537390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729556425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729583566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729603925Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729630064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729643429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729682 containerd[1460]: time="2025-09-05T00:09:35.729655652Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729721345Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729753796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729777791Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729807316Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729829528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729847792Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729871546Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:09:35.729931 containerd[1460]: time="2025-09-05T00:09:35.729883479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 00:09:35.730431 containerd[1460]: time="2025-09-05T00:09:35.730343631Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:09:35.730431 containerd[1460]: time="2025-09-05T00:09:35.730415707Z" level=info msg="Connect containerd service" Sep 5 00:09:35.730770 containerd[1460]: time="2025-09-05T00:09:35.730489585Z" level=info msg="using legacy CRI server" Sep 5 00:09:35.730770 containerd[1460]: time="2025-09-05T00:09:35.730501738Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:09:35.730770 containerd[1460]: time="2025-09-05T00:09:35.730657049Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:09:35.731638 containerd[1460]: time="2025-09-05T00:09:35.731600809Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:09:35.732562 
containerd[1460]: time="2025-09-05T00:09:35.731786377Z" level=info msg="Start subscribing containerd event" Sep 5 00:09:35.732562 containerd[1460]: time="2025-09-05T00:09:35.731918104Z" level=info msg="Start recovering state" Sep 5 00:09:35.732562 containerd[1460]: time="2025-09-05T00:09:35.732133878Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:09:35.732562 containerd[1460]: time="2025-09-05T00:09:35.732218206Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:09:35.732662 containerd[1460]: time="2025-09-05T00:09:35.732628375Z" level=info msg="Start event monitor" Sep 5 00:09:35.732662 containerd[1460]: time="2025-09-05T00:09:35.732648313Z" level=info msg="Start snapshots syncer" Sep 5 00:09:35.732739 containerd[1460]: time="2025-09-05T00:09:35.732675243Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:09:35.732739 containerd[1460]: time="2025-09-05T00:09:35.732689009Z" level=info msg="Start streaming server" Sep 5 00:09:35.732841 containerd[1460]: time="2025-09-05T00:09:35.732814094Z" level=info msg="containerd successfully booted in 0.060344s" Sep 5 00:09:35.749592 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:09:35.806117 tar[1453]: linux-amd64/README.md Sep 5 00:09:35.830086 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:09:36.072727 systemd-networkd[1389]: eth0: Gained IPv6LL Sep 5 00:09:36.077000 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:09:36.078953 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:09:36.094641 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:09:36.097230 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:36.099385 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:09:36.119954 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:09:36.120203 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:09:36.122218 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:09:36.124716 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:09:37.665088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:37.666923 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:09:37.668879 systemd[1]: Startup finished in 993ms (kernel) + 6.304s (initrd) + 5.388s (userspace) = 12.686s. Sep 5 00:09:37.690854 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:09:38.335312 kubelet[1541]: E0905 00:09:38.335206 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:09:38.340945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:09:38.341238 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:09:38.341846 systemd[1]: kubelet.service: Consumed 2.090s CPU time. Sep 5 00:09:39.075086 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
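The kubelet failure above is expected at this point in the boot: /var/lib/kubelet/config.yaml is normally written by kubeadm during init or join, so the unit exits until that file appears. Purely to illustrate the file the error message is looking for, a minimal KubeletConfiguration sketch (the fields shown are standard, but the file kubeadm actually generates carries many more settings):

    cat <<'EOF' > /var/lib/kubelet/config.yaml
    # Illustrative minimal kubelet config; kubeadm normally generates this file.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF

The "Consumed 2.090s CPU time" accounting and the later "Scheduled restart job" entries show systemd retrying the unit until that file exists.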
Sep 5 00:09:39.076591 systemd[1]: Started sshd@0-10.0.0.52:22-10.0.0.1:41210.service - OpenSSH per-connection server daemon (10.0.0.1:41210). Sep 5 00:09:39.137061 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 41210 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:39.139905 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:39.148948 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:09:39.167664 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:09:39.169579 systemd-logind[1443]: New session 1 of user core. Sep 5 00:09:39.182285 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:09:39.184376 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:09:39.203366 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:09:39.325283 systemd[1558]: Queued start job for default target default.target. Sep 5 00:09:39.335944 systemd[1558]: Created slice app.slice - User Application Slice. Sep 5 00:09:39.335979 systemd[1558]: Reached target paths.target - Paths. Sep 5 00:09:39.335999 systemd[1558]: Reached target timers.target - Timers. Sep 5 00:09:39.337857 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:09:39.350627 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:09:39.350814 systemd[1558]: Reached target sockets.target - Sockets. Sep 5 00:09:39.350835 systemd[1558]: Reached target basic.target - Basic System. Sep 5 00:09:39.350888 systemd[1558]: Reached target default.target - Main User Target. Sep 5 00:09:39.350928 systemd[1558]: Startup finished in 139ms. Sep 5 00:09:39.351271 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:09:39.353098 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:09:39.419980 systemd[1]: Started sshd@1-10.0.0.52:22-10.0.0.1:41222.service - OpenSSH per-connection server daemon (10.0.0.1:41222). Sep 5 00:09:39.455996 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 41222 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:39.457833 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:39.462445 systemd-logind[1443]: New session 2 of user core. Sep 5 00:09:39.472603 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:09:39.528397 sshd[1569]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:39.542637 systemd[1]: sshd@1-10.0.0.52:22-10.0.0.1:41222.service: Deactivated successfully. Sep 5 00:09:39.544689 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:09:39.546568 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:09:39.556870 systemd[1]: Started sshd@2-10.0.0.52:22-10.0.0.1:41226.service - OpenSSH per-connection server daemon (10.0.0.1:41226). Sep 5 00:09:39.558548 systemd-logind[1443]: Removed session 2. Sep 5 00:09:39.590004 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 41226 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:39.592897 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:39.597665 systemd-logind[1443]: New session 3 of user core. Sep 5 00:09:39.612567 systemd[1]: Started session-3.scope - Session 3 of User core. 
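Each login in this sequence produces the same systemd scaffolding: a per-connection sshd@...service, a user-500.slice, a single user@500.service manager shared by all of the user's sessions, and one session-N.scope per login. loginctl exposes the same objects at runtime:

    loginctl list-sessions        # one row per session-N.scope
    loginctl user-status core     # shows user@500.service and the session tree beneath it

This is why the logins below (sessions 2 through 7) reuse user@500.service instead of starting a new per-user manager.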
Sep 5 00:09:39.664012 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:39.673253 systemd[1]: sshd@2-10.0.0.52:22-10.0.0.1:41226.service: Deactivated successfully. Sep 5 00:09:39.675136 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:09:39.676914 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:09:39.678267 systemd[1]: Started sshd@3-10.0.0.52:22-10.0.0.1:41238.service - OpenSSH per-connection server daemon (10.0.0.1:41238). Sep 5 00:09:39.679178 systemd-logind[1443]: Removed session 3. Sep 5 00:09:39.714346 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 41238 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:39.716729 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:39.721473 systemd-logind[1443]: New session 4 of user core. Sep 5 00:09:39.737552 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:09:39.796955 sshd[1583]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:39.806501 systemd[1]: sshd@3-10.0.0.52:22-10.0.0.1:41238.service: Deactivated successfully. Sep 5 00:09:39.808581 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:09:39.810155 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:09:39.818799 systemd[1]: Started sshd@4-10.0.0.52:22-10.0.0.1:41254.service - OpenSSH per-connection server daemon (10.0.0.1:41254). Sep 5 00:09:39.819764 systemd-logind[1443]: Removed session 4. Sep 5 00:09:39.848172 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 41254 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:39.850346 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:39.854653 systemd-logind[1443]: New session 5 of user core. Sep 5 00:09:39.864557 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:09:39.924812 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:09:39.925185 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:39.941330 sudo[1593]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:39.943704 sshd[1590]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:39.955810 systemd[1]: sshd@4-10.0.0.52:22-10.0.0.1:41254.service: Deactivated successfully. Sep 5 00:09:39.957916 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:09:39.959759 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:09:39.961352 systemd[1]: Started sshd@5-10.0.0.52:22-10.0.0.1:55348.service - OpenSSH per-connection server daemon (10.0.0.1:55348). Sep 5 00:09:39.962256 systemd-logind[1443]: Removed session 5. Sep 5 00:09:39.995129 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 55348 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:39.996865 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:40.001127 systemd-logind[1443]: New session 6 of user core. Sep 5 00:09:40.010550 systemd[1]: Started session-6.scope - Session 6 of User core. 
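The first privileged command in the session chain above is setenforce 1, which flips SELinux from permissive to enforcing at runtime. Two standard SELinux utilities confirm the resulting mode:

    getenforce    # prints Enforcing / Permissive / Disabled
    sestatus      # fuller report: policy, mount point, current vs. configured mode

Note that setenforce only changes the running state; the boot-time default comes from /etc/selinux/config.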
Sep 5 00:09:40.065635 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:09:40.066002 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:40.070588 sudo[1602]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:40.077460 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:09:40.077815 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:40.097680 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:09:40.099639 auditctl[1605]: No rules Sep 5 00:09:40.100845 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:09:40.101146 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:09:40.103069 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:09:40.143918 augenrules[1623]: No rules Sep 5 00:09:40.146007 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:09:40.147752 sudo[1601]: pam_unix(sudo:session): session closed for user root Sep 5 00:09:40.149905 sshd[1598]: pam_unix(sshd:session): session closed for user core Sep 5 00:09:40.165475 systemd[1]: sshd@5-10.0.0.52:22-10.0.0.1:55348.service: Deactivated successfully. Sep 5 00:09:40.167297 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:09:40.169102 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:09:40.176710 systemd[1]: Started sshd@6-10.0.0.52:22-10.0.0.1:55356.service - OpenSSH per-connection server daemon (10.0.0.1:55356). Sep 5 00:09:40.177743 systemd-logind[1443]: Removed session 6. Sep 5 00:09:40.209507 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 55356 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:09:40.211222 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:09:40.215094 systemd-logind[1443]: New session 7 of user core. Sep 5 00:09:40.224547 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:09:40.278469 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:09:40.278818 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:09:41.041653 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:09:41.041792 (dockerd)[1652]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:09:41.739210 dockerd[1652]: time="2025-09-05T00:09:41.739121639Z" level=info msg="Starting up" Sep 5 00:09:42.211725 systemd[1]: var-lib-docker-metacopy\x2dcheck226061037-merged.mount: Deactivated successfully. Sep 5 00:09:42.244023 dockerd[1652]: time="2025-09-05T00:09:42.243952987Z" level=info msg="Loading containers: start." Sep 5 00:09:42.417464 kernel: Initializing XFRM netlink socket Sep 5 00:09:42.509878 systemd-networkd[1389]: docker0: Link UP Sep 5 00:09:42.629255 dockerd[1652]: time="2025-09-05T00:09:42.629179697Z" level=info msg="Loading containers: done." Sep 5 00:09:42.750522 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3315790070-merged.mount: Deactivated successfully. 
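The audit-rules sequence above is the stock augenrules flow: the install steps remove two rule fragments from /etc/audit/rules.d/, then restart audit-rules.service, which flushes rules on stop (hence the first "No rules" from auditctl) and reloads whatever remains on start (hence the second, from augenrules). The equivalent manual steps, using the file names from the sudo log:

    rm -f /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load into the kernel
    auditctl -l          # list loaded rules; prints "No rules" when the set is empty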
Sep 5 00:09:42.794459 dockerd[1652]: time="2025-09-05T00:09:42.794267998Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:09:42.794948 dockerd[1652]: time="2025-09-05T00:09:42.794466340Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:09:42.794948 dockerd[1652]: time="2025-09-05T00:09:42.794654152Z" level=info msg="Daemon has completed initialization" Sep 5 00:09:42.852638 dockerd[1652]: time="2025-09-05T00:09:42.852522585Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:09:42.852848 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:09:43.813442 containerd[1460]: time="2025-09-05T00:09:43.813350898Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 5 00:09:44.513805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4098273089.mount: Deactivated successfully. Sep 5 00:09:45.627576 containerd[1460]: time="2025-09-05T00:09:45.627494427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:45.628172 containerd[1460]: time="2025-09-05T00:09:45.628094752Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 5 00:09:45.629273 containerd[1460]: time="2025-09-05T00:09:45.629234129Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:45.632032 containerd[1460]: time="2025-09-05T00:09:45.632001257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:45.633126 containerd[1460]: time="2025-09-05T00:09:45.633096180Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 1.81969063s" Sep 5 00:09:45.633126 containerd[1460]: time="2025-09-05T00:09:45.633129763Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 5 00:09:45.633902 containerd[1460]: time="2025-09-05T00:09:45.633878377Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 5 00:09:47.665302 containerd[1460]: time="2025-09-05T00:09:47.665228658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:47.666235 containerd[1460]: time="2025-09-05T00:09:47.666147531Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 5 00:09:47.667391 containerd[1460]: time="2025-09-05T00:09:47.667350576Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:47.670399 containerd[1460]: time="2025-09-05T00:09:47.670357785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:47.671726 containerd[1460]: time="2025-09-05T00:09:47.671668382Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 2.037757314s" Sep 5 00:09:47.671726 containerd[1460]: time="2025-09-05T00:09:47.671716502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 5 00:09:47.672782 containerd[1460]: time="2025-09-05T00:09:47.672750611Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 5 00:09:48.591505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:09:48.606621 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:48.853266 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:48.859081 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:09:48.906684 kubelet[1866]: E0905 00:09:48.906545 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:09:48.914351 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:09:48.914702 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
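"Scheduled restart job, restart counter is at 1" is systemd re-running the still-unconfigurable kubelet; the roughly 10 s gap between the failure at 00:09:38 and this retry at 00:09:48 suggests a RestartSec on the order of 10 s. A hypothetical drop-in with that shape (the kubelet.service shipped on this host may declare the same policy differently):

    # /etc/systemd/system/kubelet.service.d/10-restart.conf (illustrative)
    [Service]
    Restart=on-failure
    RestartSec=10

The loop is harmless here: each attempt exits on the same missing /var/lib/kubelet/config.yaml until kubeadm writes it.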
Sep 5 00:09:51.161050 containerd[1460]: time="2025-09-05T00:09:51.160971134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:51.161769 containerd[1460]: time="2025-09-05T00:09:51.161668852Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 5 00:09:51.163016 containerd[1460]: time="2025-09-05T00:09:51.162952980Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:51.166315 containerd[1460]: time="2025-09-05T00:09:51.166229133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:51.167416 containerd[1460]: time="2025-09-05T00:09:51.167372136Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 3.494584536s" Sep 5 00:09:51.167416 containerd[1460]: time="2025-09-05T00:09:51.167412121Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 5 00:09:51.168130 containerd[1460]: time="2025-09-05T00:09:51.168088539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 5 00:09:52.255724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028148730.mount: Deactivated successfully. 
Sep 5 00:09:54.032302 containerd[1460]: time="2025-09-05T00:09:54.032228491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:54.036051 containerd[1460]: time="2025-09-05T00:09:54.035999752Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 5 00:09:54.061751 containerd[1460]: time="2025-09-05T00:09:54.061685112Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:54.153500 containerd[1460]: time="2025-09-05T00:09:54.153396831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:54.154179 containerd[1460]: time="2025-09-05T00:09:54.154144243Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 2.986020898s" Sep 5 00:09:54.154223 containerd[1460]: time="2025-09-05T00:09:54.154178096Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 5 00:09:54.155046 containerd[1460]: time="2025-09-05T00:09:54.154996200Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 5 00:09:55.099655 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3895785610.mount: Deactivated successfully. 
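The PullImage sequence (apiserver, controller-manager, scheduler, proxy, then coredns, pause and etcd below) is the standard control-plane image set for v1.33.4, fetched straight through containerd's CRI while the kubelet is still down. The pulled set can be inspected directly against containerd's k8s.io namespace:

    ctr --namespace k8s.io images ls    # raw containerd view of the CRI image store
    crictl images                       # CRI view; expects /run/containerd/containerd.sock

Both are standard containerd/cri-tools utilities; what actually triggered the pulls is not visible in this log, though the install.sh run under sudo above is one plausible origin.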
Sep 5 00:09:55.817346 containerd[1460]: time="2025-09-05T00:09:55.817283619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:55.818072 containerd[1460]: time="2025-09-05T00:09:55.818003518Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 5 00:09:55.819250 containerd[1460]: time="2025-09-05T00:09:55.819210992Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:55.822158 containerd[1460]: time="2025-09-05T00:09:55.822123854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:55.823441 containerd[1460]: time="2025-09-05T00:09:55.823379979Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.668341019s" Sep 5 00:09:55.823441 containerd[1460]: time="2025-09-05T00:09:55.823416287Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 5 00:09:55.824116 containerd[1460]: time="2025-09-05T00:09:55.823913941Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:09:56.480862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3377029135.mount: Deactivated successfully. 
Sep 5 00:09:56.486729 containerd[1460]: time="2025-09-05T00:09:56.486682421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:56.487480 containerd[1460]: time="2025-09-05T00:09:56.487398334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:09:56.488509 containerd[1460]: time="2025-09-05T00:09:56.488476185Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:56.490554 containerd[1460]: time="2025-09-05T00:09:56.490517001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:56.491191 containerd[1460]: time="2025-09-05T00:09:56.491168733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 667.218564ms" Sep 5 00:09:56.491251 containerd[1460]: time="2025-09-05T00:09:56.491197237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:09:56.491736 containerd[1460]: time="2025-09-05T00:09:56.491713785Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 5 00:09:57.086884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346700654.mount: Deactivated successfully. Sep 5 00:09:59.165175 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:09:59.178893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:09:59.519240 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:09:59.524856 (kubelet)[2008]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:09:59.734687 kubelet[2008]: E0905 00:09:59.734612 2008 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:09:59.739072 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:09:59.739298 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
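One detail worth noticing: the CRI config dumped earlier declares SandboxImage:registry.k8s.io/pause:3.8, yet pause:3.10 is what gets pulled, so the pulls are clearly driven by a newer image list than containerd's built-in default. If the mismatch mattered (stale pause images, an extra pull at pod-sandbox creation), the containerd side could be aligned with a one-line change, sketched against the usual config path:

    # /etc/containerd/config.toml (version 2 schema), illustrative fragment
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10"

followed by a containerd restart. The key name sandbox_image is the real CRI plugin option; the need to change it here is an inference, not something the log states.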
Sep 5 00:09:59.853473 containerd[1460]: time="2025-09-05T00:09:59.853339953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:59.856096 containerd[1460]: time="2025-09-05T00:09:59.856047009Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 5 00:09:59.857301 containerd[1460]: time="2025-09-05T00:09:59.857263159Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:59.860771 containerd[1460]: time="2025-09-05T00:09:59.860735841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:09:59.861964 containerd[1460]: time="2025-09-05T00:09:59.861918097Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.370173615s" Sep 5 00:09:59.862027 containerd[1460]: time="2025-09-05T00:09:59.861965596Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 5 00:10:03.367634 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:03.378674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:03.403110 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit session-7.scope)... Sep 5 00:10:03.403128 systemd[1]: Reloading... Sep 5 00:10:03.486865 zram_generator::config[2094]: No configuration found. Sep 5 00:10:03.675636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:10:03.754989 systemd[1]: Reloading finished in 351 ms. Sep 5 00:10:03.807816 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:03.811224 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:10:03.811535 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:03.813379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:03.978546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:03.984344 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:10:04.031782 kubelet[2135]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:10:04.031782 kubelet[2135]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
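During the reload, systemd warns that docker.socket still points at the legacy /var/run/docker.sock and rewrites it on the fly. If one wanted to silence the warning permanently, the modern spelling fits in a small override (illustrative drop-in; the empty assignment clears the inherited listener list before re-adding):

    # /etc/systemd/system/docker.socket.d/10-runpath.conf (illustrative)
    [Socket]
    ListenStream=
    ListenStream=/run/docker.sock

The kubelet's flag-deprecation warnings, which continue below, ask for a similar migration: flags like --container-runtime-endpoint belong in the KubeletConfiguration file sketched earlier rather than on the command line.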
Sep 5 00:10:04.031782 kubelet[2135]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:10:04.032201 kubelet[2135]: I0905 00:10:04.031839 2135 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:10:04.384242 kubelet[2135]: I0905 00:10:04.384137 2135 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:10:04.384242 kubelet[2135]: I0905 00:10:04.384164 2135 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:10:04.384479 kubelet[2135]: I0905 00:10:04.384459 2135 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:10:04.410784 kubelet[2135]: I0905 00:10:04.410726 2135 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:10:04.411014 kubelet[2135]: E0905 00:10:04.410945 2135 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:10:04.423218 kubelet[2135]: E0905 00:10:04.423164 2135 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:10:04.423218 kubelet[2135]: I0905 00:10:04.423200 2135 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:10:04.430571 kubelet[2135]: I0905 00:10:04.430550 2135 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:10:04.430948 kubelet[2135]: I0905 00:10:04.430895 2135 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:10:04.431177 kubelet[2135]: I0905 00:10:04.430933 2135 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:10:04.431379 kubelet[2135]: I0905 00:10:04.431189 2135 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:10:04.431379 kubelet[2135]: I0905 00:10:04.431208 2135 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:10:04.432210 kubelet[2135]: I0905 00:10:04.432174 2135 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:10:04.435229 kubelet[2135]: I0905 00:10:04.435191 2135 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:10:04.435323 kubelet[2135]: I0905 00:10:04.435289 2135 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:10:04.435390 kubelet[2135]: I0905 00:10:04.435369 2135 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:10:04.438320 kubelet[2135]: I0905 00:10:04.438209 2135 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:10:04.447874 kubelet[2135]: E0905 00:10:04.447582 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:10:04.447874 kubelet[2135]: I0905 00:10:04.447741 2135 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:10:04.449714 kubelet[2135]: I0905 00:10:04.449210 2135 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 
00:10:04.451011 kubelet[2135]: W0905 00:10:04.450088 2135 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:10:04.453259 kubelet[2135]: E0905 00:10:04.451979 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:10:04.456377 kubelet[2135]: I0905 00:10:04.456346 2135 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:10:04.456466 kubelet[2135]: I0905 00:10:04.456453 2135 server.go:1289] "Started kubelet" Sep 5 00:10:04.457210 kubelet[2135]: I0905 00:10:04.456746 2135 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:10:04.458446 kubelet[2135]: I0905 00:10:04.458397 2135 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:10:04.458517 kubelet[2135]: I0905 00:10:04.458494 2135 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:10:04.459453 kubelet[2135]: I0905 00:10:04.459409 2135 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:10:04.460339 kubelet[2135]: I0905 00:10:04.460313 2135 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:10:04.462439 kubelet[2135]: E0905 00:10:04.460510 2135 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:10:04.462439 kubelet[2135]: I0905 00:10:04.460658 2135 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:10:04.463213 kubelet[2135]: E0905 00:10:04.463186 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:04.463266 kubelet[2135]: I0905 00:10:04.463231 2135 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:10:04.463525 kubelet[2135]: I0905 00:10:04.463498 2135 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:10:04.463620 kubelet[2135]: I0905 00:10:04.463603 2135 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:10:04.463724 kubelet[2135]: E0905 00:10:04.462644 2135 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.52:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.52:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a63aff865d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:10:04.456379862 +0000 UTC m=+0.465224978,LastTimestamp:2025-09-05 00:10:04.456379862 +0000 UTC m=+0.465224978,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:10:04.464922 kubelet[2135]: E0905 00:10:04.464854 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="200ms" Sep 5 00:10:04.464976 kubelet[2135]: I0905 00:10:04.464932 2135 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:10:04.465075 kubelet[2135]: I0905 00:10:04.465050 2135 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:10:04.465399 kubelet[2135]: E0905 00:10:04.465371 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:10:04.466210 kubelet[2135]: I0905 00:10:04.466191 2135 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:10:04.489338 kubelet[2135]: I0905 00:10:04.489245 2135 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:10:04.489338 kubelet[2135]: I0905 00:10:04.489269 2135 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:10:04.489338 kubelet[2135]: I0905 00:10:04.489290 2135 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:10:04.489673 kubelet[2135]: I0905 00:10:04.489631 2135 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:10:04.492752 kubelet[2135]: I0905 00:10:04.492476 2135 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:10:04.492752 kubelet[2135]: I0905 00:10:04.492502 2135 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:10:04.492752 kubelet[2135]: I0905 00:10:04.492525 2135 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:10:04.492752 kubelet[2135]: I0905 00:10:04.492538 2135 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:10:04.492752 kubelet[2135]: E0905 00:10:04.492583 2135 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:10:04.493334 kubelet[2135]: E0905 00:10:04.493311 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:10:04.563358 kubelet[2135]: E0905 00:10:04.563316 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:04.593641 kubelet[2135]: E0905 00:10:04.593612 2135 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:10:04.663922 kubelet[2135]: E0905 00:10:04.663880 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:04.665500 kubelet[2135]: E0905 00:10:04.665470 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="400ms" Sep 5 00:10:04.764684 kubelet[2135]: E0905 00:10:04.764607 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:04.794023 kubelet[2135]: E0905 00:10:04.793995 2135 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:10:04.865392 kubelet[2135]: E0905 00:10:04.865346 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:04.966518 kubelet[2135]: E0905 00:10:04.966355 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:05.149596 kubelet[2135]: E0905 00:10:05.066158 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="800ms" Sep 5 00:10:05.149596 kubelet[2135]: E0905 00:10:05.067181 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:05.156207 kubelet[2135]: I0905 00:10:05.156184 2135 policy_none.go:49] "None policy: Start" Sep 5 00:10:05.156284 kubelet[2135]: I0905 00:10:05.156230 2135 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:10:05.156284 kubelet[2135]: I0905 00:10:05.156266 2135 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:10:05.165989 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:10:05.167800 kubelet[2135]: E0905 00:10:05.167775 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:05.188006 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
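The register-node failures close the same loop: the kubelet cannot POST its Node object ("localhost") until the API server answers, and the API server only appears once its static pod manifest is processed; the earlier "Adding static pod path" entry points at where that manifest must land. A minimal way to watch this phase, assuming the standard kubeadm layout:

    ls /etc/kubernetes/manifests/    # kube-apiserver.yaml etc. appear here after kubeadm init
    crictl ps -a                     # static-pod containers show up as soon as kubelet syncs the directory

Until then, entries like node "localhost" not found and the lease-controller retries simply repeat with backoff, visible above in the 200ms, 400ms, 800ms retry intervals.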
Sep 5 00:10:05.191855 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:10:05.194936 kubelet[2135]: E0905 00:10:05.194903 2135 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:10:05.203366 kubelet[2135]: E0905 00:10:05.203338 2135 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:10:05.203808 kubelet[2135]: I0905 00:10:05.203671 2135 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:10:05.203808 kubelet[2135]: I0905 00:10:05.203704 2135 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:10:05.204393 kubelet[2135]: I0905 00:10:05.204008 2135 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:10:05.204985 kubelet[2135]: E0905 00:10:05.204959 2135 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:10:05.205057 kubelet[2135]: E0905 00:10:05.205018 2135 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:10:05.306010 kubelet[2135]: I0905 00:10:05.305901 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:10:05.306399 kubelet[2135]: E0905 00:10:05.306334 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Sep 5 00:10:05.454152 kubelet[2135]: E0905 00:10:05.454062 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.52:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 00:10:05.508940 kubelet[2135]: I0905 00:10:05.508893 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:10:05.509317 kubelet[2135]: E0905 00:10:05.509270 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Sep 5 00:10:05.543054 kubelet[2135]: E0905 00:10:05.542980 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:10:05.648663 kubelet[2135]: E0905 00:10:05.648467 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:10:05.867901 kubelet[2135]: E0905 00:10:05.867815 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 
10.0.0.52:6443: connect: connection refused" interval="1.6s" Sep 5 00:10:05.911605 kubelet[2135]: I0905 00:10:05.911466 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:10:05.911836 kubelet[2135]: E0905 00:10:05.911806 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Sep 5 00:10:05.993417 kubelet[2135]: E0905 00:10:05.993354 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.52:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 00:10:06.005956 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 00:10:06.020112 kubelet[2135]: E0905 00:10:06.020061 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:06.023502 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 5 00:10:06.025787 kubelet[2135]: E0905 00:10:06.025739 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:06.027800 systemd[1]: Created slice kubepods-burstable-pod50092c729b6bc407603dcfacd9a9478b.slice - libcontainer container kubepods-burstable-pod50092c729b6bc407603dcfacd9a9478b.slice. 
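[Annotation] Each static pod above gets its own libcontainer slice under kubepods-burstable.slice, named from the pod UID (e.g. kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice). When a UID contains dashes, they are escaped to underscores in the systemd unit name, as the kube-proxy slice later in this log shows. A sketch of that naming rule, assuming the systemd cgroup driver (which the kubelet config later in the log confirms via "CgroupDriver":"systemd"):

```go
// Sketch of the kubepods slice naming visible in this log: the pod UID is
// appended to the QoS slice, with "-" escaped to "_" for systemd unit names
// (compare kubepods-besteffort-pod4bd4bbf4_d18e_4374_b914_241579b04bc1.slice
// later in the log).
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qosClass, podUID string) string {
	escaped := strings.ReplaceAll(podUID, "-", "_")
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "8de7187202bee21b84740a213836f615"))
	fmt.Println(podSliceName("besteffort", "4bd4bbf4-d18e-4374-b914-241579b04bc1"))
}
```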
Sep 5 00:10:06.029484 kubelet[2135]: E0905 00:10:06.029451 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:06.073786 kubelet[2135]: I0905 00:10:06.073746 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50092c729b6bc407603dcfacd9a9478b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50092c729b6bc407603dcfacd9a9478b\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:06.073877 kubelet[2135]: I0905 00:10:06.073788 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50092c729b6bc407603dcfacd9a9478b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50092c729b6bc407603dcfacd9a9478b\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:06.073877 kubelet[2135]: I0905 00:10:06.073812 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:06.073877 kubelet[2135]: I0905 00:10:06.073827 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:06.073877 kubelet[2135]: I0905 00:10:06.073851 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:06.074021 kubelet[2135]: I0905 00:10:06.073894 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:06.074021 kubelet[2135]: I0905 00:10:06.073932 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:10:06.074021 kubelet[2135]: I0905 00:10:06.073948 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50092c729b6bc407603dcfacd9a9478b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50092c729b6bc407603dcfacd9a9478b\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:06.074021 kubelet[2135]: I0905 00:10:06.073966 2135 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:06.321166 kubelet[2135]: E0905 00:10:06.321103 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:06.322077 containerd[1460]: time="2025-09-05T00:10:06.322032580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 00:10:06.327134 kubelet[2135]: E0905 00:10:06.327097 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:06.327492 containerd[1460]: time="2025-09-05T00:10:06.327458394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 00:10:06.329923 kubelet[2135]: E0905 00:10:06.329887 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:06.330438 containerd[1460]: time="2025-09-05T00:10:06.330395641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50092c729b6bc407603dcfacd9a9478b,Namespace:kube-system,Attempt:0,}" Sep 5 00:10:06.611311 kubelet[2135]: E0905 00:10:06.611170 2135 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.52:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 5 00:10:06.713192 kubelet[2135]: I0905 00:10:06.713155 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:10:06.713584 kubelet[2135]: E0905 00:10:06.713553 2135 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.52:6443/api/v1/nodes\": dial tcp 10.0.0.52:6443: connect: connection refused" node="localhost" Sep 5 00:10:06.867404 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2954139491.mount: Deactivated successfully. 
Sep 5 00:10:06.873689 containerd[1460]: time="2025-09-05T00:10:06.873646226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:10:06.875503 containerd[1460]: time="2025-09-05T00:10:06.875444798Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:10:06.876396 containerd[1460]: time="2025-09-05T00:10:06.876361607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:10:06.877377 containerd[1460]: time="2025-09-05T00:10:06.877349199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:10:06.878294 containerd[1460]: time="2025-09-05T00:10:06.878214832Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:10:06.879093 containerd[1460]: time="2025-09-05T00:10:06.879058844Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:10:06.879767 containerd[1460]: time="2025-09-05T00:10:06.879722248Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:10:06.885005 containerd[1460]: time="2025-09-05T00:10:06.884931926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:10:06.886292 containerd[1460]: time="2025-09-05T00:10:06.886242934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 555.750522ms" Sep 5 00:10:06.886556 containerd[1460]: time="2025-09-05T00:10:06.886523991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 558.99726ms" Sep 5 00:10:06.887590 containerd[1460]: time="2025-09-05T00:10:06.887551067Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.3871ms" Sep 5 00:10:07.135343 containerd[1460]: time="2025-09-05T00:10:07.134987955Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:07.135343 containerd[1460]: time="2025-09-05T00:10:07.135051547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:07.135343 containerd[1460]: time="2025-09-05T00:10:07.135062198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:07.135343 containerd[1460]: time="2025-09-05T00:10:07.135194994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:07.136357 containerd[1460]: time="2025-09-05T00:10:07.134740489Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:07.136357 containerd[1460]: time="2025-09-05T00:10:07.136263300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:07.136357 containerd[1460]: time="2025-09-05T00:10:07.136274983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:07.137732 containerd[1460]: time="2025-09-05T00:10:07.136991362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:07.137732 containerd[1460]: time="2025-09-05T00:10:07.137046327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:07.137732 containerd[1460]: time="2025-09-05T00:10:07.137076726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:07.137732 containerd[1460]: time="2025-09-05T00:10:07.137242064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:07.137732 containerd[1460]: time="2025-09-05T00:10:07.137492006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:07.170588 systemd[1]: Started cri-containerd-dddc10fa59399cd4426d2ad95d91f185f8d023c280142c569c09a72afc87a1c3.scope - libcontainer container dddc10fa59399cd4426d2ad95d91f185f8d023c280142c569c09a72afc87a1c3. Sep 5 00:10:07.175205 systemd[1]: Started cri-containerd-2c762f24b43cd6e16329356fdbcc24828c3fe366977a5a8a6228eaf6da3198f6.scope - libcontainer container 2c762f24b43cd6e16329356fdbcc24828c3fe366977a5a8a6228eaf6da3198f6. Sep 5 00:10:07.177626 systemd[1]: Started cri-containerd-6695bf42f0030f9e1e60483ce6cd5bf6c2e0ec559106079bba414d6a469f8526.scope - libcontainer container 6695bf42f0030f9e1e60483ce6cd5bf6c2e0ec559106079bba414d6a469f8526. 
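[Annotation] The three `Pulled image "registry.k8s.io/pause:3.8"` entries a few lines back are the sandbox (pause) image pulls that precede the runc shim startups above. A sketch of the equivalent pull with the containerd Go client; the socket path is the conventional default, assumed rather than read from the log.

```go
// Sketch of pulling the pause image in containerd's "k8s.io" namespace,
// the operation behind the "Pulled image registry.k8s.io/pause:3.8"
// entries above. Socket path is an assumed conventional default.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.8", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
}
```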
Sep 5 00:10:07.320836 containerd[1460]: time="2025-09-05T00:10:07.320762646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c762f24b43cd6e16329356fdbcc24828c3fe366977a5a8a6228eaf6da3198f6\"" Sep 5 00:10:07.322011 kubelet[2135]: E0905 00:10:07.321977 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:07.325066 containerd[1460]: time="2025-09-05T00:10:07.325016124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:50092c729b6bc407603dcfacd9a9478b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dddc10fa59399cd4426d2ad95d91f185f8d023c280142c569c09a72afc87a1c3\"" Sep 5 00:10:07.326003 kubelet[2135]: E0905 00:10:07.325973 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:07.330669 containerd[1460]: time="2025-09-05T00:10:07.330628277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6695bf42f0030f9e1e60483ce6cd5bf6c2e0ec559106079bba414d6a469f8526\"" Sep 5 00:10:07.331472 kubelet[2135]: E0905 00:10:07.331445 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:07.388238 containerd[1460]: time="2025-09-05T00:10:07.388057336Z" level=info msg="CreateContainer within sandbox \"2c762f24b43cd6e16329356fdbcc24828c3fe366977a5a8a6228eaf6da3198f6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:10:07.390501 containerd[1460]: time="2025-09-05T00:10:07.390463538Z" level=info msg="CreateContainer within sandbox \"dddc10fa59399cd4426d2ad95d91f185f8d023c280142c569c09a72afc87a1c3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:10:07.393750 containerd[1460]: time="2025-09-05T00:10:07.393558887Z" level=info msg="CreateContainer within sandbox \"6695bf42f0030f9e1e60483ce6cd5bf6c2e0ec559106079bba414d6a469f8526\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:10:07.400715 kubelet[2135]: E0905 00:10:07.400660 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.52:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 00:10:07.410739 containerd[1460]: time="2025-09-05T00:10:07.410411156Z" level=info msg="CreateContainer within sandbox \"2c762f24b43cd6e16329356fdbcc24828c3fe366977a5a8a6228eaf6da3198f6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2f6f1d130fa95a0c0a5db2ed3237c6385f49e5911898eae011dbd8dcbec798f2\"" Sep 5 00:10:07.411923 containerd[1460]: time="2025-09-05T00:10:07.411882890Z" level=info msg="StartContainer for \"2f6f1d130fa95a0c0a5db2ed3237c6385f49e5911898eae011dbd8dcbec798f2\"" Sep 5 00:10:07.468997 kubelet[2135]: E0905 00:10:07.468924 2135 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.52:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.52:6443: connect: connection refused" interval="3.2s" Sep 5 00:10:07.472729 systemd[1]: Started cri-containerd-2f6f1d130fa95a0c0a5db2ed3237c6385f49e5911898eae011dbd8dcbec798f2.scope - libcontainer container 2f6f1d130fa95a0c0a5db2ed3237c6385f49e5911898eae011dbd8dcbec798f2. Sep 5 00:10:07.480652 containerd[1460]: time="2025-09-05T00:10:07.480605970Z" level=info msg="CreateContainer within sandbox \"6695bf42f0030f9e1e60483ce6cd5bf6c2e0ec559106079bba414d6a469f8526\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b4767448a85998e09568f4006e3be165f801198fd8bc7bf346d0b4969f7c88f3\"" Sep 5 00:10:07.481402 containerd[1460]: time="2025-09-05T00:10:07.481351465Z" level=info msg="StartContainer for \"b4767448a85998e09568f4006e3be165f801198fd8bc7bf346d0b4969f7c88f3\"" Sep 5 00:10:07.483632 containerd[1460]: time="2025-09-05T00:10:07.482902561Z" level=info msg="CreateContainer within sandbox \"dddc10fa59399cd4426d2ad95d91f185f8d023c280142c569c09a72afc87a1c3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c6b03dced7e856039997a0dedc8acce74299c264719809bf5211de6d242b6b8c\"" Sep 5 00:10:07.484128 containerd[1460]: time="2025-09-05T00:10:07.484072152Z" level=info msg="StartContainer for \"c6b03dced7e856039997a0dedc8acce74299c264719809bf5211de6d242b6b8c\"" Sep 5 00:10:07.518184 systemd[1]: Started cri-containerd-b4767448a85998e09568f4006e3be165f801198fd8bc7bf346d0b4969f7c88f3.scope - libcontainer container b4767448a85998e09568f4006e3be165f801198fd8bc7bf346d0b4969f7c88f3. Sep 5 00:10:07.529115 containerd[1460]: time="2025-09-05T00:10:07.529070603Z" level=info msg="StartContainer for \"2f6f1d130fa95a0c0a5db2ed3237c6385f49e5911898eae011dbd8dcbec798f2\" returns successfully" Sep 5 00:10:07.529659 systemd[1]: Started cri-containerd-c6b03dced7e856039997a0dedc8acce74299c264719809bf5211de6d242b6b8c.scope - libcontainer container c6b03dced7e856039997a0dedc8acce74299c264719809bf5211de6d242b6b8c. 
Sep 5 00:10:07.576490 containerd[1460]: time="2025-09-05T00:10:07.576366746Z" level=info msg="StartContainer for \"b4767448a85998e09568f4006e3be165f801198fd8bc7bf346d0b4969f7c88f3\" returns successfully" Sep 5 00:10:07.582543 kubelet[2135]: E0905 00:10:07.582416 2135 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.52:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.52:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 00:10:07.583717 containerd[1460]: time="2025-09-05T00:10:07.583667691Z" level=info msg="StartContainer for \"c6b03dced7e856039997a0dedc8acce74299c264719809bf5211de6d242b6b8c\" returns successfully" Sep 5 00:10:08.316617 kubelet[2135]: I0905 00:10:08.316512 2135 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:10:08.510724 kubelet[2135]: E0905 00:10:08.510668 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:08.512498 kubelet[2135]: E0905 00:10:08.512015 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:08.512904 kubelet[2135]: E0905 00:10:08.512883 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:08.513132 kubelet[2135]: E0905 00:10:08.513081 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:08.513885 kubelet[2135]: E0905 00:10:08.513871 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:08.513963 kubelet[2135]: E0905 00:10:08.513950 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:09.220915 kubelet[2135]: I0905 00:10:09.220853 2135 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:10:09.220915 kubelet[2135]: E0905 00:10:09.220907 2135 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:10:09.243897 kubelet[2135]: E0905 00:10:09.243836 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.344950 kubelet[2135]: E0905 00:10:09.344884 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.445730 kubelet[2135]: E0905 00:10:09.445651 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.517003 kubelet[2135]: E0905 00:10:09.516858 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:09.517003 kubelet[2135]: E0905 00:10:09.516987 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Sep 5 00:10:09.517003 kubelet[2135]: E0905 00:10:09.516988 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:09.517521 kubelet[2135]: E0905 00:10:09.517075 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:09.517521 kubelet[2135]: E0905 00:10:09.517092 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:09.517521 kubelet[2135]: E0905 00:10:09.517216 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:09.546695 kubelet[2135]: E0905 00:10:09.546643 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.647305 kubelet[2135]: E0905 00:10:09.647249 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.747845 kubelet[2135]: E0905 00:10:09.747792 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.848510 kubelet[2135]: E0905 00:10:09.848375 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:09.949392 kubelet[2135]: E0905 00:10:09.949337 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.050415 kubelet[2135]: E0905 00:10:10.050346 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.151070 kubelet[2135]: E0905 00:10:10.150879 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.251497 kubelet[2135]: E0905 00:10:10.251407 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.352534 kubelet[2135]: E0905 00:10:10.352490 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.452658 kubelet[2135]: E0905 00:10:10.452596 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.518919 kubelet[2135]: E0905 00:10:10.518689 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:10.518919 kubelet[2135]: E0905 00:10:10.518767 2135 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:10:10.518919 kubelet[2135]: E0905 00:10:10.518845 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:10.518919 kubelet[2135]: E0905 00:10:10.518854 2135 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Sep 5 00:10:10.553315 kubelet[2135]: E0905 00:10:10.553263 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.653940 kubelet[2135]: E0905 00:10:10.653880 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.754623 kubelet[2135]: E0905 00:10:10.754488 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.855240 kubelet[2135]: E0905 00:10:10.855202 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.928643 systemd[1]: Reloading requested from client PID 2425 ('systemctl') (unit session-7.scope)... Sep 5 00:10:10.928659 systemd[1]: Reloading... Sep 5 00:10:10.956449 kubelet[2135]: E0905 00:10:10.955464 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:10.999479 zram_generator::config[2467]: No configuration found. Sep 5 00:10:11.056489 kubelet[2135]: E0905 00:10:11.056324 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:11.110361 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:10:11.156974 kubelet[2135]: E0905 00:10:11.156932 2135 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:10:11.202260 systemd[1]: Reloading finished in 273 ms. Sep 5 00:10:11.250958 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:11.278926 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:10:11.279250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:11.279298 systemd[1]: kubelet.service: Consumed 1.021s CPU time, 131.4M memory peak, 0B memory swap peak. Sep 5 00:10:11.286753 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:10:11.451805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:10:11.459735 (kubelet)[2509]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:10:11.502043 kubelet[2509]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:10:11.502043 kubelet[2509]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:10:11.502043 kubelet[2509]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:10:11.502484 kubelet[2509]: I0905 00:10:11.502104 2509 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:10:11.513323 kubelet[2509]: I0905 00:10:11.513285 2509 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 00:10:11.513323 kubelet[2509]: I0905 00:10:11.513307 2509 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:10:11.513548 kubelet[2509]: I0905 00:10:11.513523 2509 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 00:10:11.514880 kubelet[2509]: I0905 00:10:11.514855 2509 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 00:10:11.517767 kubelet[2509]: I0905 00:10:11.517657 2509 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:10:11.521582 kubelet[2509]: E0905 00:10:11.521547 2509 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:10:11.521582 kubelet[2509]: I0905 00:10:11.521586 2509 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:10:11.528391 kubelet[2509]: I0905 00:10:11.528351 2509 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:10:11.528791 kubelet[2509]: I0905 00:10:11.528742 2509 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:10:11.529136 kubelet[2509]: I0905 00:10:11.528794 2509 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:10:11.529217 kubelet[2509]: I0905 00:10:11.529179 2509 topology_manager.go:138] "Creating topology manager with 
none policy" Sep 5 00:10:11.529217 kubelet[2509]: I0905 00:10:11.529194 2509 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 00:10:11.529377 kubelet[2509]: I0905 00:10:11.529357 2509 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:10:11.529833 kubelet[2509]: I0905 00:10:11.529811 2509 kubelet.go:480] "Attempting to sync node with API server" Sep 5 00:10:11.529833 kubelet[2509]: I0905 00:10:11.529832 2509 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:10:11.529896 kubelet[2509]: I0905 00:10:11.529863 2509 kubelet.go:386] "Adding apiserver pod source" Sep 5 00:10:11.529896 kubelet[2509]: I0905 00:10:11.529883 2509 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:10:11.530881 kubelet[2509]: I0905 00:10:11.530838 2509 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:10:11.532930 kubelet[2509]: I0905 00:10:11.531278 2509 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 00:10:11.538519 kubelet[2509]: I0905 00:10:11.538486 2509 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:10:11.538594 kubelet[2509]: I0905 00:10:11.538547 2509 server.go:1289] "Started kubelet" Sep 5 00:10:11.539346 kubelet[2509]: I0905 00:10:11.538950 2509 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:10:11.539807 kubelet[2509]: I0905 00:10:11.539793 2509 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:10:11.541231 kubelet[2509]: I0905 00:10:11.541216 2509 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:10:11.541730 kubelet[2509]: I0905 00:10:11.541678 2509 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:10:11.543010 kubelet[2509]: I0905 00:10:11.542974 2509 server.go:317] "Adding debug handlers to kubelet server" Sep 5 00:10:11.543766 kubelet[2509]: I0905 00:10:11.543738 2509 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:10:11.543814 kubelet[2509]: I0905 00:10:11.543765 2509 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:10:11.543946 kubelet[2509]: I0905 00:10:11.543917 2509 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:10:11.544151 kubelet[2509]: I0905 00:10:11.544128 2509 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:10:11.544710 kubelet[2509]: I0905 00:10:11.544676 2509 factory.go:223] Registration of the systemd container factory successfully Sep 5 00:10:11.544842 kubelet[2509]: I0905 00:10:11.544800 2509 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:10:11.545456 kubelet[2509]: E0905 00:10:11.545399 2509 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:10:11.548087 kubelet[2509]: I0905 00:10:11.548046 2509 factory.go:223] Registration of the containerd container factory successfully Sep 5 00:10:11.561135 kubelet[2509]: I0905 00:10:11.561082 2509 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 00:10:11.562869 kubelet[2509]: I0905 00:10:11.562833 2509 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 00:10:11.562930 kubelet[2509]: I0905 00:10:11.562875 2509 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 00:10:11.562930 kubelet[2509]: I0905 00:10:11.562905 2509 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 00:10:11.562930 kubelet[2509]: I0905 00:10:11.562917 2509 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 00:10:11.563005 kubelet[2509]: E0905 00:10:11.562967 2509 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:10:11.594017 kubelet[2509]: I0905 00:10:11.593986 2509 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:10:11.594017 kubelet[2509]: I0905 00:10:11.594010 2509 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:10:11.594187 kubelet[2509]: I0905 00:10:11.594034 2509 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:10:11.594237 kubelet[2509]: I0905 00:10:11.594214 2509 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:10:11.594268 kubelet[2509]: I0905 00:10:11.594242 2509 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:10:11.594268 kubelet[2509]: I0905 00:10:11.594265 2509 policy_none.go:49] "None policy: Start" Sep 5 00:10:11.594324 kubelet[2509]: I0905 00:10:11.594278 2509 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:10:11.594324 kubelet[2509]: I0905 00:10:11.594292 2509 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:10:11.594455 kubelet[2509]: I0905 00:10:11.594440 2509 state_mem.go:75] "Updated machine memory state" Sep 5 00:10:11.599060 kubelet[2509]: E0905 00:10:11.599043 2509 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 00:10:11.599448 kubelet[2509]: I0905 00:10:11.599358 2509 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:10:11.599448 kubelet[2509]: I0905 00:10:11.599388 2509 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:10:11.599878 kubelet[2509]: I0905 00:10:11.599712 2509 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:10:11.600341 kubelet[2509]: E0905 00:10:11.600282 2509 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:10:11.664675 kubelet[2509]: I0905 00:10:11.664629 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:11.664806 kubelet[2509]: I0905 00:10:11.664710 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:11.664806 kubelet[2509]: I0905 00:10:11.664629 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:10:11.706968 kubelet[2509]: I0905 00:10:11.706822 2509 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:10:11.713821 kubelet[2509]: I0905 00:10:11.713775 2509 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:10:11.714068 kubelet[2509]: I0905 00:10:11.713882 2509 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:10:11.744905 kubelet[2509]: I0905 00:10:11.744854 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:11.744905 kubelet[2509]: I0905 00:10:11.744892 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:11.745071 kubelet[2509]: I0905 00:10:11.744916 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:11.745071 kubelet[2509]: I0905 00:10:11.744946 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:10:11.745071 kubelet[2509]: I0905 00:10:11.744967 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50092c729b6bc407603dcfacd9a9478b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"50092c729b6bc407603dcfacd9a9478b\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:11.745071 kubelet[2509]: I0905 00:10:11.744989 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:11.745071 kubelet[2509]: I0905 00:10:11.745041 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:10:11.745197 kubelet[2509]: I0905 00:10:11.745078 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50092c729b6bc407603dcfacd9a9478b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"50092c729b6bc407603dcfacd9a9478b\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:11.745197 kubelet[2509]: I0905 00:10:11.745102 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50092c729b6bc407603dcfacd9a9478b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"50092c729b6bc407603dcfacd9a9478b\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:11.975131 kubelet[2509]: E0905 00:10:11.974978 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:11.975973 kubelet[2509]: E0905 00:10:11.975947 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:11.976051 kubelet[2509]: E0905 00:10:11.975951 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:12.531225 kubelet[2509]: I0905 00:10:12.531166 2509 apiserver.go:52] "Watching apiserver" Sep 5 00:10:12.545050 kubelet[2509]: I0905 00:10:12.544993 2509 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:10:12.577792 kubelet[2509]: I0905 00:10:12.577349 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:10:12.577792 kubelet[2509]: E0905 00:10:12.577670 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:12.578111 kubelet[2509]: I0905 00:10:12.578096 2509 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:12.583954 kubelet[2509]: E0905 00:10:12.583901 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:10:12.584124 kubelet[2509]: E0905 00:10:12.584021 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:12.863183 kubelet[2509]: E0905 00:10:12.862984 2509 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:10:12.863331 kubelet[2509]: E0905 00:10:12.863197 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:13.069558 kubelet[2509]: I0905 00:10:13.069463 2509 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.068811184 podStartE2EDuration="2.068811184s" podCreationTimestamp="2025-09-05 00:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:10:13.060119803 +0000 UTC m=+1.594199381" watchObservedRunningTime="2025-09-05 00:10:13.068811184 +0000 UTC m=+1.602890762" Sep 5 00:10:13.069765 kubelet[2509]: I0905 00:10:13.069605 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.0696003 podStartE2EDuration="2.0696003s" podCreationTimestamp="2025-09-05 00:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:10:13.068713187 +0000 UTC m=+1.602792765" watchObservedRunningTime="2025-09-05 00:10:13.0696003 +0000 UTC m=+1.603679878" Sep 5 00:10:13.080621 kubelet[2509]: I0905 00:10:13.079636 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.079589007 podStartE2EDuration="2.079589007s" podCreationTimestamp="2025-09-05 00:10:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:10:13.077344573 +0000 UTC m=+1.611424151" watchObservedRunningTime="2025-09-05 00:10:13.079589007 +0000 UTC m=+1.613668596" Sep 5 00:10:13.578546 kubelet[2509]: E0905 00:10:13.578503 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:13.579079 kubelet[2509]: E0905 00:10:13.578759 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:16.859197 kubelet[2509]: I0905 00:10:16.859134 2509 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:10:16.859774 kubelet[2509]: I0905 00:10:16.859622 2509 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:10:16.859837 containerd[1460]: time="2025-09-05T00:10:16.859465117Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:10:17.938504 systemd[1]: Created slice kubepods-besteffort-pod4bd4bbf4_d18e_4374_b914_241579b04bc1.slice - libcontainer container kubepods-besteffort-pod4bd4bbf4_d18e_4374_b914_241579b04bc1.slice. 
Sep 5 00:10:17.984341 kubelet[2509]: I0905 00:10:17.984252 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4bd4bbf4-d18e-4374-b914-241579b04bc1-lib-modules\") pod \"kube-proxy-d4wdg\" (UID: \"4bd4bbf4-d18e-4374-b914-241579b04bc1\") " pod="kube-system/kube-proxy-d4wdg" Sep 5 00:10:17.984341 kubelet[2509]: I0905 00:10:17.984327 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6msts\" (UniqueName: \"kubernetes.io/projected/4bd4bbf4-d18e-4374-b914-241579b04bc1-kube-api-access-6msts\") pod \"kube-proxy-d4wdg\" (UID: \"4bd4bbf4-d18e-4374-b914-241579b04bc1\") " pod="kube-system/kube-proxy-d4wdg" Sep 5 00:10:17.984876 kubelet[2509]: I0905 00:10:17.984365 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4bd4bbf4-d18e-4374-b914-241579b04bc1-kube-proxy\") pod \"kube-proxy-d4wdg\" (UID: \"4bd4bbf4-d18e-4374-b914-241579b04bc1\") " pod="kube-system/kube-proxy-d4wdg" Sep 5 00:10:17.984876 kubelet[2509]: I0905 00:10:17.984389 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4bd4bbf4-d18e-4374-b914-241579b04bc1-xtables-lock\") pod \"kube-proxy-d4wdg\" (UID: \"4bd4bbf4-d18e-4374-b914-241579b04bc1\") " pod="kube-system/kube-proxy-d4wdg" Sep 5 00:10:18.055520 systemd[1]: Created slice kubepods-besteffort-pod0d5cdd78_489e_4e3a_ba79_a8626de50ccd.slice - libcontainer container kubepods-besteffort-pod0d5cdd78_489e_4e3a_ba79_a8626de50ccd.slice. Sep 5 00:10:18.085579 kubelet[2509]: I0905 00:10:18.085521 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0d5cdd78-489e-4e3a-ba79-a8626de50ccd-var-lib-calico\") pod \"tigera-operator-755d956888-gqdl2\" (UID: \"0d5cdd78-489e-4e3a-ba79-a8626de50ccd\") " pod="tigera-operator/tigera-operator-755d956888-gqdl2" Sep 5 00:10:18.085579 kubelet[2509]: I0905 00:10:18.085570 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfwxz\" (UniqueName: \"kubernetes.io/projected/0d5cdd78-489e-4e3a-ba79-a8626de50ccd-kube-api-access-bfwxz\") pod \"tigera-operator-755d956888-gqdl2\" (UID: \"0d5cdd78-489e-4e3a-ba79-a8626de50ccd\") " pod="tigera-operator/tigera-operator-755d956888-gqdl2" Sep 5 00:10:18.249970 kubelet[2509]: E0905 00:10:18.249807 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:18.250593 containerd[1460]: time="2025-09-05T00:10:18.250550932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4wdg,Uid:4bd4bbf4-d18e-4374-b914-241579b04bc1,Namespace:kube-system,Attempt:0,}" Sep 5 00:10:18.283976 containerd[1460]: time="2025-09-05T00:10:18.283630919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:18.283976 containerd[1460]: time="2025-09-05T00:10:18.283750376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:18.283976 containerd[1460]: time="2025-09-05T00:10:18.283769092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:18.283976 containerd[1460]: time="2025-09-05T00:10:18.283912264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:18.310698 systemd[1]: Started cri-containerd-2399b0c7b473ce72a15127623983ad1ad99a5fbb59102b07a5d5114266637cf9.scope - libcontainer container 2399b0c7b473ce72a15127623983ad1ad99a5fbb59102b07a5d5114266637cf9. Sep 5 00:10:18.337502 containerd[1460]: time="2025-09-05T00:10:18.337451305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-d4wdg,Uid:4bd4bbf4-d18e-4374-b914-241579b04bc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"2399b0c7b473ce72a15127623983ad1ad99a5fbb59102b07a5d5114266637cf9\"" Sep 5 00:10:18.338281 kubelet[2509]: E0905 00:10:18.338238 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:18.359789 containerd[1460]: time="2025-09-05T00:10:18.359742669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gqdl2,Uid:0d5cdd78-489e-4e3a-ba79-a8626de50ccd,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:10:18.410307 containerd[1460]: time="2025-09-05T00:10:18.410253578Z" level=info msg="CreateContainer within sandbox \"2399b0c7b473ce72a15127623983ad1ad99a5fbb59102b07a5d5114266637cf9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:10:18.437723 containerd[1460]: time="2025-09-05T00:10:18.437670939Z" level=info msg="CreateContainer within sandbox \"2399b0c7b473ce72a15127623983ad1ad99a5fbb59102b07a5d5114266637cf9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c70593a34d79f1c9af8be6a05ea41fd79e30daa58222ca071075d292167e53c4\"" Sep 5 00:10:18.438746 containerd[1460]: time="2025-09-05T00:10:18.438697429Z" level=info msg="StartContainer for \"c70593a34d79f1c9af8be6a05ea41fd79e30daa58222ca071075d292167e53c4\"" Sep 5 00:10:18.443744 containerd[1460]: time="2025-09-05T00:10:18.443643615Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:18.443744 containerd[1460]: time="2025-09-05T00:10:18.443699502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:18.443744 containerd[1460]: time="2025-09-05T00:10:18.443710883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:18.443953 containerd[1460]: time="2025-09-05T00:10:18.443818928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:18.475664 systemd[1]: Started cri-containerd-d68e93cc85760b6793cf146734ada69423cf0214d6907fd790488deb6d342d44.scope - libcontainer container d68e93cc85760b6793cf146734ada69423cf0214d6907fd790488deb6d342d44. Sep 5 00:10:18.479314 systemd[1]: Started cri-containerd-c70593a34d79f1c9af8be6a05ea41fd79e30daa58222ca071075d292167e53c4.scope - libcontainer container c70593a34d79f1c9af8be6a05ea41fd79e30daa58222ca071075d292167e53c4. 
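Each RunPodSandbox line above is containerd echoing the metadata of a CRI call kubelet made over the runtime socket; the sandbox id it returns (2399b0c7b473...) is what the subsequent CreateContainer and StartContainer records operate within. A minimal sketch of the same call against the CRI v1 API, assuming containerd's default socket path (/run/containerd/containerd.sock); a real kubelet populates far more of PodSandboxConfig (log directory, DNS, Linux options), so containerd may reject a request this bare — it is only meant to show where the echoed metadata comes from:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (socket path assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// The same metadata fields containerd echoes in the log line above.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-d4wdg",
				Uid:       "4bd4bbf4-d18e-4374-b914-241579b04bc1",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}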
Sep 5 00:10:18.523572 containerd[1460]: time="2025-09-05T00:10:18.522080141Z" level=info msg="StartContainer for \"c70593a34d79f1c9af8be6a05ea41fd79e30daa58222ca071075d292167e53c4\" returns successfully" Sep 5 00:10:18.523572 containerd[1460]: time="2025-09-05T00:10:18.522227711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gqdl2,Uid:0d5cdd78-489e-4e3a-ba79-a8626de50ccd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d68e93cc85760b6793cf146734ada69423cf0214d6907fd790488deb6d342d44\"" Sep 5 00:10:18.526747 containerd[1460]: time="2025-09-05T00:10:18.526714784Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:10:18.540761 kubelet[2509]: E0905 00:10:18.540692 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:18.589416 kubelet[2509]: E0905 00:10:18.589056 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:18.589416 kubelet[2509]: E0905 00:10:18.589321 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:18.603354 kubelet[2509]: I0905 00:10:18.603279 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-d4wdg" podStartSLOduration=1.603257202 podStartE2EDuration="1.603257202s" podCreationTimestamp="2025-09-05 00:10:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:10:18.60291375 +0000 UTC m=+7.136993338" watchObservedRunningTime="2025-09-05 00:10:18.603257202 +0000 UTC m=+7.137336780" Sep 5 00:10:19.589601 kubelet[2509]: E0905 00:10:19.589547 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:19.771833 kubelet[2509]: E0905 00:10:19.771794 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:19.984616 update_engine[1447]: I20250905 00:10:19.984464 1447 update_attempter.cc:509] Updating boot flags... Sep 5 00:10:20.040041 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2820) Sep 5 00:10:20.083871 kubelet[2509]: E0905 00:10:20.083832 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:20.098906 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2822) Sep 5 00:10:20.107606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1780687303.mount: Deactivated successfully. 
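The pod_startup_latency_tracker records are wall-clock arithmetic over the timestamps they print. For kube-proxy-d4wdg above, both pull timestamps are the zero value (0001-01-01, meaning no image pull was needed), so the SLO and E2E figures coincide: 00:10:18.603257202 minus the 00:10:17 creation time is exactly the logged 1.603257202s. When a pull does happen, the pull window is subtracted — the tigera-operator record at 00:10:21 further down shows 3.603305001s end-to-end less a 2.210064160s pull window (per the monotonic m=+ offsets) giving its logged 1.393240841s SLO duration. A quick check of the kube-proxy case, parsing the format kubelet prints:

package main

import (
	"fmt"
	"time"
)

// layout matches the wall-clock timestamps kubelet prints in these records.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, err := time.Parse(layout, "2025-09-05 00:10:17 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-09-05 00:10:18.603257202 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// No image pull occurred, so SLO duration == end-to-end duration.
	fmt.Println(running.Sub(created)) // 1.603257202s
}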
Sep 5 00:10:20.592224 kubelet[2509]: E0905 00:10:20.591731 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:20.592224 kubelet[2509]: E0905 00:10:20.591747 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:20.730806 containerd[1460]: time="2025-09-05T00:10:20.730735782Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:20.731444 containerd[1460]: time="2025-09-05T00:10:20.731374754Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:10:20.732671 containerd[1460]: time="2025-09-05T00:10:20.732645384Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:20.734913 containerd[1460]: time="2025-09-05T00:10:20.734870485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:20.735530 containerd[1460]: time="2025-09-05T00:10:20.735503905Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.208652101s" Sep 5 00:10:20.735568 containerd[1460]: time="2025-09-05T00:10:20.735534013Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:10:20.740964 containerd[1460]: time="2025-09-05T00:10:20.740928084Z" level=info msg="CreateContainer within sandbox \"d68e93cc85760b6793cf146734ada69423cf0214d6907fd790488deb6d342d44\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:10:20.753791 containerd[1460]: time="2025-09-05T00:10:20.753737418Z" level=info msg="CreateContainer within sandbox \"d68e93cc85760b6793cf146734ada69423cf0214d6907fd790488deb6d342d44\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"937ec459e79907ea8584cd5dc86b2c7ddc29a9278a90fe5ccc7f5b60492e524c\"" Sep 5 00:10:20.754275 containerd[1460]: time="2025-09-05T00:10:20.754241053Z" level=info msg="StartContainer for \"937ec459e79907ea8584cd5dc86b2c7ddc29a9278a90fe5ccc7f5b60492e524c\"" Sep 5 00:10:20.785557 systemd[1]: Started cri-containerd-937ec459e79907ea8584cd5dc86b2c7ddc29a9278a90fe5ccc7f5b60492e524c.scope - libcontainer container 937ec459e79907ea8584cd5dc86b2c7ddc29a9278a90fe5ccc7f5b60492e524c. 
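The flood of driver-call.go/plugins.go records that begins at 00:10:30 below is kubelet's FlexVolume probe tripping over a driver directory with no binary in it: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds exists, but the uds executable it should contain is missing, so every init call produces empty output and decoding that output fails. The "unexpected end of JSON input" text is exactly what Go's encoding/json returns for empty input, as this two-line check shows (DriverStatus is a stand-in for the real driver-call response type):

package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus stands in for the FlexVolume driver-call response type;
// a working driver prints a JSON status object on stdout.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message"`
}

func main() {
	var st DriverStatus
	// The missing nodeagent~uds/uds binary yields output: "" — decoding an
	// empty byte slice is what produces "unexpected end of JSON input".
	err := json.Unmarshal([]byte(""), &st)
	fmt.Println(err) // unexpected end of JSON input
}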
Sep 5 00:10:20.817195 containerd[1460]: time="2025-09-05T00:10:20.817140157Z" level=info msg="StartContainer for \"937ec459e79907ea8584cd5dc86b2c7ddc29a9278a90fe5ccc7f5b60492e524c\" returns successfully" Sep 5 00:10:21.603390 kubelet[2509]: I0905 00:10:21.603322 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-gqdl2" podStartSLOduration=1.393240841 podStartE2EDuration="3.603305001s" podCreationTimestamp="2025-09-05 00:10:18 +0000 UTC" firstStartedPulling="2025-09-05 00:10:18.526284578 +0000 UTC m=+7.060364156" lastFinishedPulling="2025-09-05 00:10:20.736348748 +0000 UTC m=+9.270428316" observedRunningTime="2025-09-05 00:10:21.6032098 +0000 UTC m=+10.137289378" watchObservedRunningTime="2025-09-05 00:10:21.603305001 +0000 UTC m=+10.137384579" Sep 5 00:10:26.884459 sudo[1634]: pam_unix(sudo:session): session closed for user root Sep 5 00:10:26.886546 sshd[1631]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:26.890337 systemd[1]: sshd@6-10.0.0.52:22-10.0.0.1:55356.service: Deactivated successfully. Sep 5 00:10:26.893276 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:10:26.893636 systemd[1]: session-7.scope: Consumed 6.259s CPU time, 160.0M memory peak, 0B memory swap peak. Sep 5 00:10:26.895041 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:10:26.896870 systemd-logind[1443]: Removed session 7. Sep 5 00:10:29.334491 systemd[1]: Created slice kubepods-besteffort-pod9e3ebf91_609d_4637_81c5_5749fdb8b491.slice - libcontainer container kubepods-besteffort-pod9e3ebf91_609d_4637_81c5_5749fdb8b491.slice. Sep 5 00:10:29.354925 kubelet[2509]: I0905 00:10:29.354850 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e3ebf91-609d-4637-81c5-5749fdb8b491-tigera-ca-bundle\") pod \"calico-typha-555dbd6bbd-ks9nb\" (UID: \"9e3ebf91-609d-4637-81c5-5749fdb8b491\") " pod="calico-system/calico-typha-555dbd6bbd-ks9nb" Sep 5 00:10:29.354925 kubelet[2509]: I0905 00:10:29.354919 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmk8h\" (UniqueName: \"kubernetes.io/projected/9e3ebf91-609d-4637-81c5-5749fdb8b491-kube-api-access-wmk8h\") pod \"calico-typha-555dbd6bbd-ks9nb\" (UID: \"9e3ebf91-609d-4637-81c5-5749fdb8b491\") " pod="calico-system/calico-typha-555dbd6bbd-ks9nb" Sep 5 00:10:29.355401 kubelet[2509]: I0905 00:10:29.354941 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9e3ebf91-609d-4637-81c5-5749fdb8b491-typha-certs\") pod \"calico-typha-555dbd6bbd-ks9nb\" (UID: \"9e3ebf91-609d-4637-81c5-5749fdb8b491\") " pod="calico-system/calico-typha-555dbd6bbd-ks9nb" Sep 5 00:10:29.638537 kubelet[2509]: E0905 00:10:29.638061 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:29.639583 containerd[1460]: time="2025-09-05T00:10:29.639108236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555dbd6bbd-ks9nb,Uid:9e3ebf91-609d-4637-81c5-5749fdb8b491,Namespace:calico-system,Attempt:0,}" Sep 5 00:10:29.862117 systemd[1]: Created slice kubepods-besteffort-pod33379ea3_a339_4a33_bc41_0e97e7a77b01.slice - libcontainer container 
kubepods-besteffort-pod33379ea3_a339_4a33_bc41_0e97e7a77b01.slice. Sep 5 00:10:29.863354 containerd[1460]: time="2025-09-05T00:10:29.863189815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:29.863354 containerd[1460]: time="2025-09-05T00:10:29.863268945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:29.863354 containerd[1460]: time="2025-09-05T00:10:29.863280186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:29.864445 containerd[1460]: time="2025-09-05T00:10:29.863687024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:29.900582 systemd[1]: Started cri-containerd-6e80af932530ffaf9c352ca66156eee2ffce1c5bf78d5971cbf8b8bcf2d23307.scope - libcontainer container 6e80af932530ffaf9c352ca66156eee2ffce1c5bf78d5971cbf8b8bcf2d23307. Sep 5 00:10:29.942986 containerd[1460]: time="2025-09-05T00:10:29.942900666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-555dbd6bbd-ks9nb,Uid:9e3ebf91-609d-4637-81c5-5749fdb8b491,Namespace:calico-system,Attempt:0,} returns sandbox id \"6e80af932530ffaf9c352ca66156eee2ffce1c5bf78d5971cbf8b8bcf2d23307\"" Sep 5 00:10:29.946771 kubelet[2509]: E0905 00:10:29.946684 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:29.949615 containerd[1460]: time="2025-09-05T00:10:29.949581283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:10:29.959214 kubelet[2509]: I0905 00:10:29.959155 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-xtables-lock\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959214 kubelet[2509]: I0905 00:10:29.959210 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-var-run-calico\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959416 kubelet[2509]: I0905 00:10:29.959238 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7pcj\" (UniqueName: \"kubernetes.io/projected/33379ea3-a339-4a33-bc41-0e97e7a77b01-kube-api-access-c7pcj\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959416 kubelet[2509]: I0905 00:10:29.959265 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-cni-log-dir\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959416 kubelet[2509]: I0905 00:10:29.959285 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" 
(UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-var-lib-calico\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959416 kubelet[2509]: I0905 00:10:29.959306 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-policysync\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959416 kubelet[2509]: I0905 00:10:29.959325 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/33379ea3-a339-4a33-bc41-0e97e7a77b01-node-certs\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959648 kubelet[2509]: I0905 00:10:29.959357 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-lib-modules\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959648 kubelet[2509]: I0905 00:10:29.959380 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-cni-net-dir\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959648 kubelet[2509]: I0905 00:10:29.959400 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-flexvol-driver-host\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959648 kubelet[2509]: I0905 00:10:29.959452 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/33379ea3-a339-4a33-bc41-0e97e7a77b01-cni-bin-dir\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.959648 kubelet[2509]: I0905 00:10:29.959471 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33379ea3-a339-4a33-bc41-0e97e7a77b01-tigera-ca-bundle\") pod \"calico-node-8b82m\" (UID: \"33379ea3-a339-4a33-bc41-0e97e7a77b01\") " pod="calico-system/calico-node-8b82m" Sep 5 00:10:29.964492 kubelet[2509]: E0905 00:10:29.964336 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:30.059941 kubelet[2509]: I0905 00:10:30.059881 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg2lz\" (UniqueName: \"kubernetes.io/projected/d4aa0d59-f65d-4bd5-953e-3a3464571ba3-kube-api-access-bg2lz\") 
pod \"csi-node-driver-54j5k\" (UID: \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\") " pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:30.060128 kubelet[2509]: I0905 00:10:30.059972 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d4aa0d59-f65d-4bd5-953e-3a3464571ba3-socket-dir\") pod \"csi-node-driver-54j5k\" (UID: \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\") " pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:30.060128 kubelet[2509]: I0905 00:10:30.060006 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d4aa0d59-f65d-4bd5-953e-3a3464571ba3-registration-dir\") pod \"csi-node-driver-54j5k\" (UID: \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\") " pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:30.060128 kubelet[2509]: I0905 00:10:30.060048 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d4aa0d59-f65d-4bd5-953e-3a3464571ba3-kubelet-dir\") pod \"csi-node-driver-54j5k\" (UID: \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\") " pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:30.060128 kubelet[2509]: I0905 00:10:30.060074 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d4aa0d59-f65d-4bd5-953e-3a3464571ba3-varrun\") pod \"csi-node-driver-54j5k\" (UID: \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\") " pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:30.065103 kubelet[2509]: E0905 00:10:30.064873 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.065103 kubelet[2509]: W0905 00:10:30.064928 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.065103 kubelet[2509]: E0905 00:10:30.065012 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.065708 kubelet[2509]: E0905 00:10:30.065672 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.065708 kubelet[2509]: W0905 00:10:30.065703 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.065798 kubelet[2509]: E0905 00:10:30.065715 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:30.066650 kubelet[2509]: E0905 00:10:30.066542 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.066650 kubelet[2509]: W0905 00:10:30.066570 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.066650 kubelet[2509]: E0905 00:10:30.066600 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.069600 kubelet[2509]: E0905 00:10:30.069570 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.069600 kubelet[2509]: W0905 00:10:30.069587 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.069600 kubelet[2509]: E0905 00:10:30.069599 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.161294 kubelet[2509]: E0905 00:10:30.161101 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.161294 kubelet[2509]: W0905 00:10:30.161128 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.161294 kubelet[2509]: E0905 00:10:30.161152 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.161550 kubelet[2509]: E0905 00:10:30.161409 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.161550 kubelet[2509]: W0905 00:10:30.161444 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.161550 kubelet[2509]: E0905 00:10:30.161457 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.161754 kubelet[2509]: E0905 00:10:30.161725 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.161754 kubelet[2509]: W0905 00:10:30.161754 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.161918 kubelet[2509]: E0905 00:10:30.161780 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:30.162128 kubelet[2509]: E0905 00:10:30.162090 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.162128 kubelet[2509]: W0905 00:10:30.162107 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.162128 kubelet[2509]: E0905 00:10:30.162119 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.162347 kubelet[2509]: E0905 00:10:30.162328 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.162347 kubelet[2509]: W0905 00:10:30.162343 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.162474 kubelet[2509]: E0905 00:10:30.162352 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.162880 kubelet[2509]: E0905 00:10:30.162860 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.162880 kubelet[2509]: W0905 00:10:30.162878 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.162967 kubelet[2509]: E0905 00:10:30.162891 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.163186 kubelet[2509]: E0905 00:10:30.163161 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.163186 kubelet[2509]: W0905 00:10:30.163175 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.163242 kubelet[2509]: E0905 00:10:30.163186 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.163412 kubelet[2509]: E0905 00:10:30.163398 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.163412 kubelet[2509]: W0905 00:10:30.163409 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.163490 kubelet[2509]: E0905 00:10:30.163418 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:30.163720 kubelet[2509]: E0905 00:10:30.163705 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.163720 kubelet[2509]: W0905 00:10:30.163717 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.163786 kubelet[2509]: E0905 00:10:30.163726 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.164005 kubelet[2509]: E0905 00:10:30.163991 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.164038 kubelet[2509]: W0905 00:10:30.164003 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.164038 kubelet[2509]: E0905 00:10:30.164013 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.164270 kubelet[2509]: E0905 00:10:30.164256 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.164270 kubelet[2509]: W0905 00:10:30.164268 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.164342 kubelet[2509]: E0905 00:10:30.164280 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.164540 kubelet[2509]: E0905 00:10:30.164527 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.164540 kubelet[2509]: W0905 00:10:30.164538 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.164612 kubelet[2509]: E0905 00:10:30.164547 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.164769 kubelet[2509]: E0905 00:10:30.164755 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.164769 kubelet[2509]: W0905 00:10:30.164766 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.164830 kubelet[2509]: E0905 00:10:30.164774 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:30.165016 kubelet[2509]: E0905 00:10:30.165003 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.165016 kubelet[2509]: W0905 00:10:30.165013 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.165209 kubelet[2509]: E0905 00:10:30.165021 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.165239 kubelet[2509]: E0905 00:10:30.165217 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.165239 kubelet[2509]: W0905 00:10:30.165225 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.165239 kubelet[2509]: E0905 00:10:30.165233 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.165446 kubelet[2509]: E0905 00:10:30.165416 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.165446 kubelet[2509]: W0905 00:10:30.165444 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.165549 kubelet[2509]: E0905 00:10:30.165456 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.165580 containerd[1460]: time="2025-09-05T00:10:30.165493774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8b82m,Uid:33379ea3-a339-4a33-bc41-0e97e7a77b01,Namespace:calico-system,Attempt:0,}" Sep 5 00:10:30.165721 kubelet[2509]: E0905 00:10:30.165705 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.165721 kubelet[2509]: W0905 00:10:30.165719 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.165815 kubelet[2509]: E0905 00:10:30.165731 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:30.165993 kubelet[2509]: E0905 00:10:30.165975 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.165993 kubelet[2509]: W0905 00:10:30.165990 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.166078 kubelet[2509]: E0905 00:10:30.166003 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.166265 kubelet[2509]: E0905 00:10:30.166252 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.166265 kubelet[2509]: W0905 00:10:30.166264 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.166400 kubelet[2509]: E0905 00:10:30.166274 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.166552 kubelet[2509]: E0905 00:10:30.166539 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.166597 kubelet[2509]: W0905 00:10:30.166551 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.166597 kubelet[2509]: E0905 00:10:30.166561 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.166803 kubelet[2509]: E0905 00:10:30.166789 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.166803 kubelet[2509]: W0905 00:10:30.166801 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.166861 kubelet[2509]: E0905 00:10:30.166813 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.167083 kubelet[2509]: E0905 00:10:30.167068 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.167083 kubelet[2509]: W0905 00:10:30.167081 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.167150 kubelet[2509]: E0905 00:10:30.167093 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:30.167307 kubelet[2509]: E0905 00:10:30.167294 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.167307 kubelet[2509]: W0905 00:10:30.167305 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.167369 kubelet[2509]: E0905 00:10:30.167314 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.167558 kubelet[2509]: E0905 00:10:30.167545 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.167558 kubelet[2509]: W0905 00:10:30.167555 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.167605 kubelet[2509]: E0905 00:10:30.167564 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.167911 kubelet[2509]: E0905 00:10:30.167895 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.167911 kubelet[2509]: W0905 00:10:30.167908 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.167994 kubelet[2509]: E0905 00:10:30.167920 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.178326 kubelet[2509]: E0905 00:10:30.178289 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:30.178326 kubelet[2509]: W0905 00:10:30.178313 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:30.178326 kubelet[2509]: E0905 00:10:30.178335 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:30.201256 containerd[1460]: time="2025-09-05T00:10:30.200924037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:10:30.201256 containerd[1460]: time="2025-09-05T00:10:30.201024446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:10:30.201256 containerd[1460]: time="2025-09-05T00:10:30.201047389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:30.201256 containerd[1460]: time="2025-09-05T00:10:30.201162526Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:10:30.223628 systemd[1]: Started cri-containerd-4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13.scope - libcontainer container 4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13. Sep 5 00:10:30.253300 containerd[1460]: time="2025-09-05T00:10:30.253228539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8b82m,Uid:33379ea3-a339-4a33-bc41-0e97e7a77b01,Namespace:calico-system,Attempt:0,} returns sandbox id \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\"" Sep 5 00:10:31.565183 kubelet[2509]: E0905 00:10:31.564475 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:33.564203 kubelet[2509]: E0905 00:10:33.564120 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:33.797554 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3825812538.mount: Deactivated successfully. Sep 5 00:10:35.293319 containerd[1460]: time="2025-09-05T00:10:35.293262290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:35.320501 containerd[1460]: time="2025-09-05T00:10:35.320446825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:10:35.357877 containerd[1460]: time="2025-09-05T00:10:35.357832690Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:35.406910 containerd[1460]: time="2025-09-05T00:10:35.406843217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:10:35.407504 containerd[1460]: time="2025-09-05T00:10:35.407472662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.45785493s" Sep 5 00:10:35.407561 containerd[1460]: time="2025-09-05T00:10:35.407505273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:10:35.536591 containerd[1460]: time="2025-09-05T00:10:35.536533811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:10:35.564120 kubelet[2509]: E0905 00:10:35.564015 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:35.663159 containerd[1460]: time="2025-09-05T00:10:35.663108596Z" level=info msg="CreateContainer within sandbox \"6e80af932530ffaf9c352ca66156eee2ffce1c5bf78d5971cbf8b8bcf2d23307\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:10:37.334186 containerd[1460]: time="2025-09-05T00:10:37.334114684Z" level=info msg="CreateContainer within sandbox \"6e80af932530ffaf9c352ca66156eee2ffce1c5bf78d5971cbf8b8bcf2d23307\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea03b4ef2764111faf76686c0ee79b2fff135e2977cd0a4fc3b14b1ce230fca8\"" Sep 5 00:10:37.334855 containerd[1460]: time="2025-09-05T00:10:37.334807628Z" level=info msg="StartContainer for \"ea03b4ef2764111faf76686c0ee79b2fff135e2977cd0a4fc3b14b1ce230fca8\"" Sep 5 00:10:37.362254 systemd[1]: run-containerd-runc-k8s.io-ea03b4ef2764111faf76686c0ee79b2fff135e2977cd0a4fc3b14b1ce230fca8-runc.msEtIt.mount: Deactivated successfully. Sep 5 00:10:37.376705 systemd[1]: Started cri-containerd-ea03b4ef2764111faf76686c0ee79b2fff135e2977cd0a4fc3b14b1ce230fca8.scope - libcontainer container ea03b4ef2764111faf76686c0ee79b2fff135e2977cd0a4fc3b14b1ce230fca8. Sep 5 00:10:37.422763 containerd[1460]: time="2025-09-05T00:10:37.422593730Z" level=info msg="StartContainer for \"ea03b4ef2764111faf76686c0ee79b2fff135e2977cd0a4fc3b14b1ce230fca8\" returns successfully" Sep 5 00:10:37.566453 kubelet[2509]: E0905 00:10:37.563783 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:37.639993 kubelet[2509]: E0905 00:10:37.639845 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:37.693265 kubelet[2509]: E0905 00:10:37.693224 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.693265 kubelet[2509]: W0905 00:10:37.693248 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.693265 kubelet[2509]: E0905 00:10:37.693269 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.693586 kubelet[2509]: E0905 00:10:37.693566 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.693586 kubelet[2509]: W0905 00:10:37.693580 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.693648 kubelet[2509]: E0905 00:10:37.693590 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[The same three-record FlexVolume probe failure repeats, timestamps aside, more than a dozen further times between 00:10:37.693 and 00:10:37.718 as the plugin directory is probed again; only the final occurrence is kept below.]
Sep 5 00:10:37.718384 kubelet[2509]: E0905 00:10:37.718362 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:10:37.718384 kubelet[2509]: W0905 00:10:37.718372 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:10:37.718384 kubelet[2509]: E0905 00:10:37.718381 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 5 00:10:37.718688 kubelet[2509]: E0905 00:10:37.718654 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.718688 kubelet[2509]: W0905 00:10:37.718668 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.718688 kubelet[2509]: E0905 00:10:37.718678 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.718936 kubelet[2509]: E0905 00:10:37.718920 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.718936 kubelet[2509]: W0905 00:10:37.718931 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.718986 kubelet[2509]: E0905 00:10:37.718940 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.719196 kubelet[2509]: E0905 00:10:37.719170 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.719196 kubelet[2509]: W0905 00:10:37.719184 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.719196 kubelet[2509]: E0905 00:10:37.719192 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.719409 kubelet[2509]: E0905 00:10:37.719395 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.719409 kubelet[2509]: W0905 00:10:37.719404 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.719470 kubelet[2509]: E0905 00:10:37.719412 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.719761 kubelet[2509]: E0905 00:10:37.719724 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.719761 kubelet[2509]: W0905 00:10:37.719737 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.719761 kubelet[2509]: E0905 00:10:37.719749 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:10:37.720081 kubelet[2509]: E0905 00:10:37.720054 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.720081 kubelet[2509]: W0905 00:10:37.720074 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.720081 kubelet[2509]: E0905 00:10:37.720088 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.720340 kubelet[2509]: E0905 00:10:37.720316 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.720340 kubelet[2509]: W0905 00:10:37.720332 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.720340 kubelet[2509]: E0905 00:10:37.720342 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.720636 kubelet[2509]: E0905 00:10:37.720615 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.720636 kubelet[2509]: W0905 00:10:37.720631 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.720691 kubelet[2509]: E0905 00:10:37.720643 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.720910 kubelet[2509]: E0905 00:10:37.720888 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.720910 kubelet[2509]: W0905 00:10:37.720903 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.720991 kubelet[2509]: E0905 00:10:37.720914 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:10:37.721206 kubelet[2509]: E0905 00:10:37.721186 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:10:37.721206 kubelet[2509]: W0905 00:10:37.721201 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:10:37.721263 kubelet[2509]: E0905 00:10:37.721212 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
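These bursts are kubelet's FlexVolume prober: it executes every binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init and expects a JSON status object on stdout. Here the nodeagent~uds/uds executable is missing, so stdout is empty and driver-call.go's JSON decode fails with "unexpected end of JSON input". As a hedged sketch (not the real uds driver, which is Calico/Istio's node-agent socket plugin), the init handshake a driver must implement looks roughly like this in Go:

// flexvol_sketch.go: minimal FlexVolume "init" handshake, assuming the
// JSON shape kubelet's driver-call.go unmarshals. Hypothetical driver.
package main

import (
	"encoding/json"
	"os"
)

// driverStatus mirrors the status object kubelet parses after every
// driver call; empty stdout is exactly what produces the
// "unexpected end of JSON input" errors in the log above.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	enc := json.NewEncoder(os.Stdout)
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// init must succeed and advertise capabilities, e.g. whether
		// kubelet should route attach/detach calls to this driver.
		enc.Encode(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		return
	}
	// Verbs the driver does not implement still answer with valid
	// JSON rather than empty output.
	enc.Encode(driverStatus{Status: "Not supported", Message: "unhandled call"})
}

Because the probe is retried whenever the plugin directory is rescanned, a missing executable produces exactly this kind of repeating triplet rather than a single error.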
Sep 5 00:10:38.410915 kubelet[2509]: I0905 00:10:38.410825 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-555dbd6bbd-ks9nb" podStartSLOduration=3.951793576 podStartE2EDuration="9.410804335s" podCreationTimestamp="2025-09-05 00:10:29 +0000 UTC" firstStartedPulling="2025-09-05 00:10:29.949200414 +0000 UTC m=+18.483279992" lastFinishedPulling="2025-09-05 00:10:35.408211173 +0000 UTC m=+23.942290751" observedRunningTime="2025-09-05 00:10:38.382634325 +0000 UTC m=+26.916714044" watchObservedRunningTime="2025-09-05 00:10:38.410804335 +0000 UTC m=+26.944883913"
Sep 5 00:10:38.637772 kubelet[2509]: I0905 00:10:38.637722 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:10:38.638183 kubelet[2509]: E0905 00:10:38.638089 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:10:38.704140 kubelet[2509]: E0905 00:10:38.704094 2509 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 5 00:10:38.704140 kubelet[2509]: W0905 00:10:38.704127 2509 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 5 00:10:38.704140 kubelet[2509]: E0905 00:10:38.704156 2509 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same three kubelet FlexVolume messages repeat verbatim, timestamps Sep 5 00:10:38.704 through Sep 5 00:10:38.727 ...]
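The dns.go record above is kubelet noting that the node's resolv.conf lists more nameservers than the classic glibc resolver limit of three, so it keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8 and omits the rest when building pod DNS config. A minimal sketch of that check, assuming the conventional /etc/resolv.conf path (an illustration, not kubelet's actual code):

// nameserver_check.go: count nameserver entries in resolv.conf and
// report which ones would be dropped, mirroring the warning above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers mirrors the glibc resolver limit kubelet enforces
// when it assembles a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf") // assumed path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limit exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}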
Sep 5 00:10:39.479824 containerd[1460]: time="2025-09-05T00:10:39.479770799Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:10:39.480639 containerd[1460]: time="2025-09-05T00:10:39.480603906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 5 00:10:39.482060 containerd[1460]: time="2025-09-05T00:10:39.482031412Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:10:39.484367 containerd[1460]: time="2025-09-05T00:10:39.484340647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:10:39.484959 containerd[1460]: time="2025-09-05T00:10:39.484911741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 3.948325743s"
Sep 5 00:10:39.485012 containerd[1460]: time="2025-09-05T00:10:39.484969089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 5 00:10:39.502987 containerd[1460]: time="2025-09-05T00:10:39.502944336Z" level=info msg="CreateContainer within sandbox \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 5 00:10:39.518466 containerd[1460]: time="2025-09-05T00:10:39.518398441Z" level=info msg="CreateContainer within sandbox \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9\""
Sep 5 00:10:39.518956 containerd[1460]: time="2025-09-05T00:10:39.518917297Z" level=info msg="StartContainer for \"3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9\""
Sep 5 00:10:39.553573 systemd[1]: Started cri-containerd-3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9.scope - libcontainer container 3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9.
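The "in 3.948325743s" figure above is the wall-clock pull time containerd reports for the pod2daemon-flexvol image. A small, hypothetical stdlib-only helper for pulling those durations out of a captured journal like this one (not part of any tool shown here):

// pull_times.go: scan a containerd journal on stdin and print the
// image name and duration from each "Pulled image ... in <dur>" record.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// pulledRE matches the escaped-quote form these records take in the
// journal text; it targets second-scale durations like those above.
var pulledRE = regexp.MustCompile(`Pulled image \\?"([^"\\]+)\\?".* in ([0-9.]+m?s)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines run long
	for sc.Scan() {
		if m := pulledRE.FindStringSubmatch(sc.Text()); m != nil {
			d, err := time.ParseDuration(m[2])
			if err != nil {
				continue
			}
			fmt.Printf("%-60s %s\n", m[1], d.Round(time.Millisecond))
		}
	}
}

Fed this section, it would report the two Calico pulls (about 3.95s for pod2daemon-flexvol and 5.59s for cni).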
Sep 5 00:10:39.581257 kubelet[2509]: E0905 00:10:39.581177 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3"
Sep 5 00:10:39.603406 containerd[1460]: time="2025-09-05T00:10:39.603361945Z" level=info msg="StartContainer for \"3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9\" returns successfully"
Sep 5 00:10:39.605019 systemd[1]: cri-containerd-3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9.scope: Deactivated successfully.
Sep 5 00:10:39.629191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9-rootfs.mount: Deactivated successfully.
Sep 5 00:10:39.992445 containerd[1460]: time="2025-09-05T00:10:39.992345575Z" level=info msg="shim disconnected" id=3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9 namespace=k8s.io
Sep 5 00:10:39.992445 containerd[1460]: time="2025-09-05T00:10:39.992446083Z" level=warning msg="cleaning up after shim disconnected" id=3eb53d50d39a608f091bb03abb40a4b3754c57ab5e49af82834fde6be42330c9 namespace=k8s.io
Sep 5 00:10:39.992445 containerd[1460]: time="2025-09-05T00:10:39.992458827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:10:40.645565 containerd[1460]: time="2025-09-05T00:10:40.645504760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 5 00:10:41.566227 kubelet[2509]: E0905 00:10:41.564332 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3"
Sep 5 00:10:43.563517 kubelet[2509]: E0905 00:10:43.563460 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3"
Sep 5 00:10:45.564281 kubelet[2509]: E0905 00:10:45.564025 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3"
Sep 5 00:10:45.841895 containerd[1460]: time="2025-09-05T00:10:45.841740032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:10:45.933576 containerd[1460]: time="2025-09-05T00:10:45.933494476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 5 00:10:46.103175 containerd[1460]: time="2025-09-05T00:10:46.103040756Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:10:46.238246 containerd[1460]: time="2025-09-05T00:10:46.238190698Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:10:46.239331 containerd[1460]: time="2025-09-05T00:10:46.239289151Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.59372584s"
Sep 5 00:10:46.239331 containerd[1460]: time="2025-09-05T00:10:46.239329517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 5 00:10:46.383510 containerd[1460]: time="2025-09-05T00:10:46.383383499Z" level=info msg="CreateContainer within sandbox \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 5 00:10:47.563783 kubelet[2509]: E0905 00:10:47.563724 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3"
Sep 5 00:10:48.060549 containerd[1460]: time="2025-09-05T00:10:48.060497960Z" level=info msg="CreateContainer within sandbox \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f\""
Sep 5 00:10:48.061027 containerd[1460]: time="2025-09-05T00:10:48.060936815Z" level=info msg="StartContainer for \"e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f\""
Sep 5 00:10:48.094573 systemd[1]: Started cri-containerd-e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f.scope - libcontainer container e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f.
Sep 5 00:10:50.495924 containerd[1460]: time="2025-09-05T00:10:50.495874018Z" level=info msg="StartContainer for \"e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f\" returns successfully"
Sep 5 00:10:50.496506 kubelet[2509]: E0905 00:10:50.496375 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3"
Sep 5 00:10:50.555104 systemd[1]: Started sshd@7-10.0.0.52:22-10.0.0.1:47804.service - OpenSSH per-connection server daemon (10.0.0.1:47804).
Sep 5 00:10:50.587871 sshd[3300]: Accepted publickey for core from 10.0.0.1 port 47804 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:10:50.589765 sshd[3300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:10:50.596641 systemd-logind[1443]: New session 8 of user core.
Sep 5 00:10:50.602635 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 5 00:10:50.743302 sshd[3300]: pam_unix(sshd:session): session closed for user core
Sep 5 00:10:50.747587 systemd[1]: sshd@7-10.0.0.52:22-10.0.0.1:47804.service: Deactivated successfully.
Sep 5 00:10:50.749833 systemd[1]: session-8.scope: Deactivated successfully.
Sep 5 00:10:50.750489 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit.
Sep 5 00:10:50.751617 systemd-logind[1443]: Removed session 8.
Sep 5 00:10:51.187360 systemd[1]: cri-containerd-e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f.scope: Deactivated successfully.
Sep 5 00:10:51.201155 kubelet[2509]: I0905 00:10:51.201115 2509 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 5 00:10:51.223085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f-rootfs.mount: Deactivated successfully.
Sep 5 00:10:51.228763 containerd[1460]: time="2025-09-05T00:10:51.228678163Z" level=info msg="shim disconnected" id=e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f namespace=k8s.io
Sep 5 00:10:51.228763 containerd[1460]: time="2025-09-05T00:10:51.228753505Z" level=warning msg="cleaning up after shim disconnected" id=e55a20fd9fdb15dcccfddedfb681f3aaef188cfe8cfc483de1c0fed36c860b6f namespace=k8s.io
Sep 5 00:10:51.228763 containerd[1460]: time="2025-09-05T00:10:51.228765508Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 5 00:10:51.237974 systemd[1]: Created slice kubepods-burstable-pod1fa28034_a693_41d8_9eae_06ce071ba306.slice - libcontainer container kubepods-burstable-pod1fa28034_a693_41d8_9eae_06ce071ba306.slice.
Sep 5 00:10:51.251543 systemd[1]: Created slice kubepods-besteffort-pod44b99312_546c_47bf_b6a7_75de1f36f388.slice - libcontainer container kubepods-besteffort-pod44b99312_546c_47bf_b6a7_75de1f36f388.slice.
Sep 5 00:10:51.263075 systemd[1]: Created slice kubepods-burstable-podc644f92f_3ab8_4f91_9628_5d21f3b334b2.slice - libcontainer container kubepods-burstable-podc644f92f_3ab8_4f91_9628_5d21f3b334b2.slice.
Sep 5 00:10:51.271920 systemd[1]: Created slice kubepods-besteffort-podb497f3e8_1406_4da9_8e71_b2f813307a42.slice - libcontainer container kubepods-besteffort-podb497f3e8_1406_4da9_8e71_b2f813307a42.slice.
Sep 5 00:10:51.281360 systemd[1]: Created slice kubepods-besteffort-pod6dd17f3b_8a0c_44c8_8301_94b73eeeab5f.slice - libcontainer container kubepods-besteffort-pod6dd17f3b_8a0c_44c8_8301_94b73eeeab5f.slice.
Sep 5 00:10:51.287440 systemd[1]: Created slice kubepods-besteffort-podcb54d995_57fe_449c_b086_a027be7852e5.slice - libcontainer container kubepods-besteffort-podcb54d995_57fe_449c_b086_a027be7852e5.slice.
Sep 5 00:10:51.293147 systemd[1]: Created slice kubepods-besteffort-pod3931ff65_c111_474a_bf9a_aefaf26362d5.slice - libcontainer container kubepods-besteffort-pod3931ff65_c111_474a_bf9a_aefaf26362d5.slice.
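The slice names above follow the systemd cgroup driver's convention: QoS class plus the pod UID with dashes mapped to underscores, since '-' is the hierarchy separator in systemd slice units. A small sketch of that mapping for the burstable and besteffort classes seen here (an illustrative reimplementation, not kubelet's actual code):

// slice_name.go: derive the systemd slice unit for a pod, matching
// the "Created slice kubepods-..." records above.
package main

import (
	"fmt"
	"strings"
)

// sliceName escapes the pod UID's dashes to underscores so they are
// not interpreted as slice-hierarchy separators by systemd.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "1fa28034-a693-41d8-9eae-06ce071ba306"))
	// -> kubepods-burstable-pod1fa28034_a693_41d8_9eae_06ce071ba306.slice
	fmt.Println(sliceName("besteffort", "44b99312-546c-47bf-b6a7-75de1f36f388"))
	// -> kubepods-besteffort-pod44b99312_546c_47bf_b6a7_75de1f36f388.slice
}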
Sep 5 00:10:51.304778 kubelet[2509]: I0905 00:10:51.304738 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb54d995-57fe-449c-b086-a027be7852e5-whisker-ca-bundle\") pod \"whisker-84f57bb86d-24ltq\" (UID: \"cb54d995-57fe-449c-b086-a027be7852e5\") " pod="calico-system/whisker-84f57bb86d-24ltq"
Sep 5 00:10:51.304778 kubelet[2509]: I0905 00:10:51.304779 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j87vw\" (UniqueName: \"kubernetes.io/projected/1fa28034-a693-41d8-9eae-06ce071ba306-kube-api-access-j87vw\") pod \"coredns-674b8bbfcf-fmz4m\" (UID: \"1fa28034-a693-41d8-9eae-06ce071ba306\") " pod="kube-system/coredns-674b8bbfcf-fmz4m"
Sep 5 00:10:51.304778 kubelet[2509]: I0905 00:10:51.304796 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1fa28034-a693-41d8-9eae-06ce071ba306-config-volume\") pod \"coredns-674b8bbfcf-fmz4m\" (UID: \"1fa28034-a693-41d8-9eae-06ce071ba306\") " pod="kube-system/coredns-674b8bbfcf-fmz4m"
Sep 5 00:10:51.305039 kubelet[2509]: I0905 00:10:51.304835 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b497f3e8-1406-4da9-8e71-b2f813307a42-goldmane-key-pair\") pod \"goldmane-54d579b49d-477dw\" (UID: \"b497f3e8-1406-4da9-8e71-b2f813307a42\") " pod="calico-system/goldmane-54d579b49d-477dw"
Sep 5 00:10:51.305039 kubelet[2509]: I0905 00:10:51.304851 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrg5h\" (UniqueName: \"kubernetes.io/projected/3931ff65-c111-474a-bf9a-aefaf26362d5-kube-api-access-mrg5h\") pod \"calico-apiserver-56f98cb9dd-wdp6j\" (UID: \"3931ff65-c111-474a-bf9a-aefaf26362d5\") " pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j"
Sep 5 00:10:51.305039 kubelet[2509]: I0905 00:10:51.304872 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfk8\" (UniqueName: \"kubernetes.io/projected/6dd17f3b-8a0c-44c8-8301-94b73eeeab5f-kube-api-access-whfk8\") pod \"calico-apiserver-56f98cb9dd-dggrz\" (UID: \"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f\") " pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz"
Sep 5 00:10:51.305039 kubelet[2509]: I0905 00:10:51.304916 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b99312-546c-47bf-b6a7-75de1f36f388-tigera-ca-bundle\") pod \"calico-kube-controllers-5fb59d4ff4-m7trx\" (UID: \"44b99312-546c-47bf-b6a7-75de1f36f388\") " pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx"
Sep 5 00:10:51.305039 kubelet[2509]: I0905 00:10:51.304930 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c644f92f-3ab8-4f91-9628-5d21f3b334b2-config-volume\") pod \"coredns-674b8bbfcf-fnhh4\" (UID: \"c644f92f-3ab8-4f91-9628-5d21f3b334b2\") " pod="kube-system/coredns-674b8bbfcf-fnhh4"
Sep 5 00:10:51.305174 kubelet[2509]: I0905 00:10:51.304945 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b497f3e8-1406-4da9-8e71-b2f813307a42-config\") pod \"goldmane-54d579b49d-477dw\" (UID: \"b497f3e8-1406-4da9-8e71-b2f813307a42\") " pod="calico-system/goldmane-54d579b49d-477dw"
Sep 5 00:10:51.305174 kubelet[2509]: I0905 00:10:51.304961 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6stc\" (UniqueName: \"kubernetes.io/projected/b497f3e8-1406-4da9-8e71-b2f813307a42-kube-api-access-j6stc\") pod \"goldmane-54d579b49d-477dw\" (UID: \"b497f3e8-1406-4da9-8e71-b2f813307a42\") " pod="calico-system/goldmane-54d579b49d-477dw"
Sep 5 00:10:51.305174 kubelet[2509]: I0905 00:10:51.304975 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3931ff65-c111-474a-bf9a-aefaf26362d5-calico-apiserver-certs\") pod \"calico-apiserver-56f98cb9dd-wdp6j\" (UID: \"3931ff65-c111-474a-bf9a-aefaf26362d5\") " pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j"
Sep 5 00:10:51.305174 kubelet[2509]: I0905 00:10:51.304989 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2gz6\" (UniqueName: \"kubernetes.io/projected/44b99312-546c-47bf-b6a7-75de1f36f388-kube-api-access-r2gz6\") pod \"calico-kube-controllers-5fb59d4ff4-m7trx\" (UID: \"44b99312-546c-47bf-b6a7-75de1f36f388\") " pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx"
Sep 5 00:10:51.305174 kubelet[2509]: I0905 00:10:51.305020 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj9mg\" (UniqueName: \"kubernetes.io/projected/c644f92f-3ab8-4f91-9628-5d21f3b334b2-kube-api-access-hj9mg\") pod \"coredns-674b8bbfcf-fnhh4\" (UID: \"c644f92f-3ab8-4f91-9628-5d21f3b334b2\") " pod="kube-system/coredns-674b8bbfcf-fnhh4"
Sep 5 00:10:51.305325 kubelet[2509]: I0905 00:10:51.305062 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb54d995-57fe-449c-b086-a027be7852e5-whisker-backend-key-pair\") pod \"whisker-84f57bb86d-24ltq\" (UID: \"cb54d995-57fe-449c-b086-a027be7852e5\") " pod="calico-system/whisker-84f57bb86d-24ltq"
Sep 5 00:10:51.305325 kubelet[2509]: I0905 00:10:51.305079 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swpqx\" (UniqueName: \"kubernetes.io/projected/cb54d995-57fe-449c-b086-a027be7852e5-kube-api-access-swpqx\") pod \"whisker-84f57bb86d-24ltq\" (UID: \"cb54d995-57fe-449c-b086-a027be7852e5\") " pod="calico-system/whisker-84f57bb86d-24ltq"
Sep 5 00:10:51.305325 kubelet[2509]: I0905 00:10:51.305094 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b497f3e8-1406-4da9-8e71-b2f813307a42-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-477dw\" (UID: \"b497f3e8-1406-4da9-8e71-b2f813307a42\") " pod="calico-system/goldmane-54d579b49d-477dw"
Sep 5 00:10:51.305325 kubelet[2509]: I0905 00:10:51.305108 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6dd17f3b-8a0c-44c8-8301-94b73eeeab5f-calico-apiserver-certs\") pod \"calico-apiserver-56f98cb9dd-dggrz\" (UID: \"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f\") " pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz"
Sep 5 00:10:51.508826 containerd[1460]: time="2025-09-05T00:10:51.508470052Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 5 00:10:51.553560 kubelet[2509]: E0905 00:10:51.553517 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:10:51.554203 containerd[1460]: time="2025-09-05T00:10:51.554164961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fmz4m,Uid:1fa28034-a693-41d8-9eae-06ce071ba306,Namespace:kube-system,Attempt:0,}"
Sep 5 00:10:51.557709 containerd[1460]: time="2025-09-05T00:10:51.557670134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb59d4ff4-m7trx,Uid:44b99312-546c-47bf-b6a7-75de1f36f388,Namespace:calico-system,Attempt:0,}"
Sep 5 00:10:51.570971 kubelet[2509]: E0905 00:10:51.570846 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:10:51.571360 containerd[1460]: time="2025-09-05T00:10:51.571296604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fnhh4,Uid:c644f92f-3ab8-4f91-9628-5d21f3b334b2,Namespace:kube-system,Attempt:0,}"
Sep 5 00:10:51.578403 containerd[1460]: time="2025-09-05T00:10:51.578354087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-477dw,Uid:b497f3e8-1406-4da9-8e71-b2f813307a42,Namespace:calico-system,Attempt:0,}"
Sep 5 00:10:51.584976 containerd[1460]: time="2025-09-05T00:10:51.584742624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-dggrz,Uid:6dd17f3b-8a0c-44c8-8301-94b73eeeab5f,Namespace:calico-apiserver,Attempt:0,}"
Sep 5 00:10:51.592036 containerd[1460]: time="2025-09-05T00:10:51.591957513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f57bb86d-24ltq,Uid:cb54d995-57fe-449c-b086-a027be7852e5,Namespace:calico-system,Attempt:0,}"
Sep 5 00:10:51.601331 containerd[1460]: time="2025-09-05T00:10:51.601061801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-wdp6j,Uid:3931ff65-c111-474a-bf9a-aefaf26362d5,Namespace:calico-apiserver,Attempt:0,}"
Sep 5 00:10:51.735463 containerd[1460]: time="2025-09-05T00:10:51.735367576Z" level=error msg="Failed to destroy network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.737449 containerd[1460]: time="2025-09-05T00:10:51.737377631Z" level=error msg="Failed to destroy network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.742780 containerd[1460]: time="2025-09-05T00:10:51.742736333Z" level=error msg="encountered an error cleaning up failed sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.742938 containerd[1460]: time="2025-09-05T00:10:51.742914308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb59d4ff4-m7trx,Uid:44b99312-546c-47bf-b6a7-75de1f36f388,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.745142 containerd[1460]: time="2025-09-05T00:10:51.745112766Z" level=error msg="encountered an error cleaning up failed sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.745263 containerd[1460]: time="2025-09-05T00:10:51.745241519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fmz4m,Uid:1fa28034-a693-41d8-9eae-06ce071ba306,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.749443 containerd[1460]: time="2025-09-05T00:10:51.749283388Z" level=error msg="Failed to destroy network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.749867 containerd[1460]: time="2025-09-05T00:10:51.749842438Z" level=error msg="encountered an error cleaning up failed sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.750013 containerd[1460]: time="2025-09-05T00:10:51.749959919Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84f57bb86d-24ltq,Uid:cb54d995-57fe-449c-b086-a027be7852e5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.750271 containerd[1460]: time="2025-09-05T00:10:51.750229245Z" level=error msg="Failed to destroy network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 5 00:10:51.750575 containerd[1460]: time="2025-09-05T00:10:51.750550768Z" level=error msg="encountered an error cleaning up failed sandbox
\"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.750629 containerd[1460]: time="2025-09-05T00:10:51.750591805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-477dw,Uid:b497f3e8-1406-4da9-8e71-b2f813307a42,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.759532 kubelet[2509]: E0905 00:10:51.759013 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.759532 kubelet[2509]: E0905 00:10:51.759029 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.759532 kubelet[2509]: E0905 00:10:51.759077 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.759532 kubelet[2509]: E0905 00:10:51.759106 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-477dw" Sep 5 00:10:51.759780 kubelet[2509]: E0905 00:10:51.759125 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fmz4m" Sep 5 00:10:51.759780 kubelet[2509]: E0905 00:10:51.759134 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-477dw" Sep 5 00:10:51.759780 kubelet[2509]: E0905 00:10:51.759150 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fmz4m" Sep 5 00:10:51.759780 kubelet[2509]: E0905 00:10:51.759125 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx" Sep 5 00:10:51.759885 kubelet[2509]: E0905 00:10:51.759208 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fmz4m_kube-system(1fa28034-a693-41d8-9eae-06ce071ba306)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fmz4m_kube-system(1fa28034-a693-41d8-9eae-06ce071ba306)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fmz4m" podUID="1fa28034-a693-41d8-9eae-06ce071ba306" Sep 5 00:10:51.759885 kubelet[2509]: E0905 00:10:51.759239 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx" Sep 5 00:10:51.759885 kubelet[2509]: E0905 00:10:51.759276 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.759998 kubelet[2509]: E0905 00:10:51.759302 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fb59d4ff4-m7trx_calico-system(44b99312-546c-47bf-b6a7-75de1f36f388)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fb59d4ff4-m7trx_calico-system(44b99312-546c-47bf-b6a7-75de1f36f388)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx" podUID="44b99312-546c-47bf-b6a7-75de1f36f388" Sep 5 00:10:51.759998 kubelet[2509]: E0905 00:10:51.759195 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-477dw_calico-system(b497f3e8-1406-4da9-8e71-b2f813307a42)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-477dw_calico-system(b497f3e8-1406-4da9-8e71-b2f813307a42)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-477dw" podUID="b497f3e8-1406-4da9-8e71-b2f813307a42" Sep 5 00:10:51.759998 kubelet[2509]: E0905 00:10:51.759327 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84f57bb86d-24ltq" Sep 5 00:10:51.760107 kubelet[2509]: E0905 00:10:51.759344 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84f57bb86d-24ltq" Sep 5 00:10:51.760107 kubelet[2509]: E0905 00:10:51.759381 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84f57bb86d-24ltq_calico-system(cb54d995-57fe-449c-b086-a027be7852e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-84f57bb86d-24ltq_calico-system(cb54d995-57fe-449c-b086-a027be7852e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84f57bb86d-24ltq" podUID="cb54d995-57fe-449c-b086-a027be7852e5" Sep 5 00:10:51.767154 containerd[1460]: time="2025-09-05T00:10:51.767106770Z" level=error msg="Failed to destroy network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.767821 containerd[1460]: time="2025-09-05T00:10:51.767795864Z" level=error msg="encountered an error cleaning up failed sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 
5 00:10:51.767918 containerd[1460]: time="2025-09-05T00:10:51.767897885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fnhh4,Uid:c644f92f-3ab8-4f91-9628-5d21f3b334b2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.768179 kubelet[2509]: E0905 00:10:51.768151 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.768302 kubelet[2509]: E0905 00:10:51.768283 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fnhh4" Sep 5 00:10:51.768442 kubelet[2509]: E0905 00:10:51.768376 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fnhh4" Sep 5 00:10:51.768584 kubelet[2509]: E0905 00:10:51.768543 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fnhh4_kube-system(c644f92f-3ab8-4f91-9628-5d21f3b334b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fnhh4_kube-system(c644f92f-3ab8-4f91-9628-5d21f3b334b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fnhh4" podUID="c644f92f-3ab8-4f91-9628-5d21f3b334b2" Sep 5 00:10:51.783029 containerd[1460]: time="2025-09-05T00:10:51.782978756Z" level=error msg="Failed to destroy network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.783345 containerd[1460]: time="2025-09-05T00:10:51.783317472Z" level=error msg="encountered an error cleaning up failed sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Sep 5 00:10:51.783391 containerd[1460]: time="2025-09-05T00:10:51.783368137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-wdp6j,Uid:3931ff65-c111-474a-bf9a-aefaf26362d5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.783607 kubelet[2509]: E0905 00:10:51.783572 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.783663 kubelet[2509]: E0905 00:10:51.783607 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j" Sep 5 00:10:51.783663 kubelet[2509]: E0905 00:10:51.783623 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j" Sep 5 00:10:51.783713 kubelet[2509]: E0905 00:10:51.783661 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56f98cb9dd-wdp6j_calico-apiserver(3931ff65-c111-474a-bf9a-aefaf26362d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56f98cb9dd-wdp6j_calico-apiserver(3931ff65-c111-474a-bf9a-aefaf26362d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j" podUID="3931ff65-c111-474a-bf9a-aefaf26362d5" Sep 5 00:10:51.789299 containerd[1460]: time="2025-09-05T00:10:51.789248550Z" level=error msg="Failed to destroy network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.789666 containerd[1460]: time="2025-09-05T00:10:51.789637962Z" level=error msg="encountered an error cleaning up failed sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.789713 containerd[1460]: time="2025-09-05T00:10:51.789690971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-dggrz,Uid:6dd17f3b-8a0c-44c8-8301-94b73eeeab5f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.789944 kubelet[2509]: E0905 00:10:51.789901 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:51.789991 kubelet[2509]: E0905 00:10:51.789971 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz" Sep 5 00:10:51.790026 kubelet[2509]: E0905 00:10:51.790000 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz" Sep 5 00:10:51.790090 kubelet[2509]: E0905 00:10:51.790054 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56f98cb9dd-dggrz_calico-apiserver(6dd17f3b-8a0c-44c8-8301-94b73eeeab5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56f98cb9dd-dggrz_calico-apiserver(6dd17f3b-8a0c-44c8-8301-94b73eeeab5f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz" podUID="6dd17f3b-8a0c-44c8-8301-94b73eeeab5f" Sep 5 00:10:52.509754 kubelet[2509]: I0905 00:10:52.509718 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:10:52.510669 kubelet[2509]: I0905 00:10:52.510651 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:10:52.513446 kubelet[2509]: I0905 00:10:52.513403 2509 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:10:52.514337 kubelet[2509]: I0905 00:10:52.514306 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:10:52.525413 containerd[1460]: time="2025-09-05T00:10:52.525357950Z" level=info msg="StopPodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\"" Sep 5 00:10:52.527080 containerd[1460]: time="2025-09-05T00:10:52.527038765Z" level=info msg="StopPodSandbox for \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\"" Sep 5 00:10:52.527921 containerd[1460]: time="2025-09-05T00:10:52.527891858Z" level=info msg="StopPodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\"" Sep 5 00:10:52.529090 containerd[1460]: time="2025-09-05T00:10:52.529052307Z" level=info msg="Ensure that sandbox 09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99 in task-service has been cleanup successfully" Sep 5 00:10:52.529130 containerd[1460]: time="2025-09-05T00:10:52.529056695Z" level=info msg="Ensure that sandbox d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9 in task-service has been cleanup successfully" Sep 5 00:10:52.529394 containerd[1460]: time="2025-09-05T00:10:52.529059961Z" level=info msg="Ensure that sandbox 4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1 in task-service has been cleanup successfully" Sep 5 00:10:52.537400 containerd[1460]: time="2025-09-05T00:10:52.529093724Z" level=info msg="StopPodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\"" Sep 5 00:10:52.537589 containerd[1460]: time="2025-09-05T00:10:52.537569840Z" level=info msg="Ensure that sandbox 16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281 in task-service has been cleanup successfully" Sep 5 00:10:52.541272 kubelet[2509]: I0905 00:10:52.541244 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:10:52.542816 containerd[1460]: time="2025-09-05T00:10:52.542789871Z" level=info msg="StopPodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\"" Sep 5 00:10:52.543938 containerd[1460]: time="2025-09-05T00:10:52.543792134Z" level=info msg="Ensure that sandbox 4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471 in task-service has been cleanup successfully" Sep 5 00:10:52.544543 kubelet[2509]: I0905 00:10:52.544152 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:10:52.544731 containerd[1460]: time="2025-09-05T00:10:52.544711410Z" level=info msg="StopPodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\"" Sep 5 00:10:52.545765 containerd[1460]: time="2025-09-05T00:10:52.545631778Z" level=info msg="Ensure that sandbox 3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059 in task-service has been cleanup successfully" Sep 5 00:10:52.547233 kubelet[2509]: I0905 00:10:52.547166 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:10:52.548765 containerd[1460]: time="2025-09-05T00:10:52.548733102Z" level=info msg="StopPodSandbox for 
\"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\"" Sep 5 00:10:52.549328 containerd[1460]: time="2025-09-05T00:10:52.549073722Z" level=info msg="Ensure that sandbox 7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508 in task-service has been cleanup successfully" Sep 5 00:10:52.571582 systemd[1]: Created slice kubepods-besteffort-podd4aa0d59_f65d_4bd5_953e_3a3464571ba3.slice - libcontainer container kubepods-besteffort-podd4aa0d59_f65d_4bd5_953e_3a3464571ba3.slice. Sep 5 00:10:52.574975 containerd[1460]: time="2025-09-05T00:10:52.574922008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54j5k,Uid:d4aa0d59-f65d-4bd5-953e-3a3464571ba3,Namespace:calico-system,Attempt:0,}" Sep 5 00:10:52.582732 containerd[1460]: time="2025-09-05T00:10:52.582667231Z" level=error msg="StopPodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" failed" error="failed to destroy network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.582996 kubelet[2509]: E0905 00:10:52.582948 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:10:52.583352 kubelet[2509]: E0905 00:10:52.583027 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1"} Sep 5 00:10:52.583352 kubelet[2509]: E0905 00:10:52.583085 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb54d995-57fe-449c-b086-a027be7852e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.583352 kubelet[2509]: E0905 00:10:52.583110 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb54d995-57fe-449c-b086-a027be7852e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84f57bb86d-24ltq" podUID="cb54d995-57fe-449c-b086-a027be7852e5" Sep 5 00:10:52.608361 containerd[1460]: time="2025-09-05T00:10:52.607829590Z" level=error msg="StopPodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" failed" error="failed to destroy network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.608498 kubelet[2509]: E0905 00:10:52.608083 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:10:52.608498 kubelet[2509]: E0905 00:10:52.608140 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99"} Sep 5 00:10:52.608498 kubelet[2509]: E0905 00:10:52.608177 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"44b99312-546c-47bf-b6a7-75de1f36f388\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.608498 kubelet[2509]: E0905 00:10:52.608211 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"44b99312-546c-47bf-b6a7-75de1f36f388\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx" podUID="44b99312-546c-47bf-b6a7-75de1f36f388" Sep 5 00:10:52.613698 containerd[1460]: time="2025-09-05T00:10:52.613648566Z" level=error msg="StopPodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" failed" error="failed to destroy network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.613919 kubelet[2509]: E0905 00:10:52.613872 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:10:52.613919 kubelet[2509]: E0905 00:10:52.613920 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059"} Sep 5 00:10:52.614117 kubelet[2509]: E0905 00:10:52.613954 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b497f3e8-1406-4da9-8e71-b2f813307a42\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.614117 kubelet[2509]: E0905 00:10:52.613976 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b497f3e8-1406-4da9-8e71-b2f813307a42\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-477dw" podUID="b497f3e8-1406-4da9-8e71-b2f813307a42" Sep 5 00:10:52.614721 containerd[1460]: time="2025-09-05T00:10:52.614667890Z" level=error msg="StopPodSandbox for \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" failed" error="failed to destroy network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.614905 kubelet[2509]: E0905 00:10:52.614877 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:10:52.615052 kubelet[2509]: E0905 00:10:52.614906 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508"} Sep 5 00:10:52.615052 kubelet[2509]: E0905 00:10:52.614952 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c644f92f-3ab8-4f91-9628-5d21f3b334b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.615052 kubelet[2509]: E0905 00:10:52.614972 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c644f92f-3ab8-4f91-9628-5d21f3b334b2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fnhh4" podUID="c644f92f-3ab8-4f91-9628-5d21f3b334b2" Sep 5 00:10:52.616910 containerd[1460]: time="2025-09-05T00:10:52.616523795Z" level=error msg="StopPodSandbox for 
\"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" failed" error="failed to destroy network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.616965 kubelet[2509]: E0905 00:10:52.616645 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:10:52.616965 kubelet[2509]: E0905 00:10:52.616668 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9"} Sep 5 00:10:52.616965 kubelet[2509]: E0905 00:10:52.616689 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.616965 kubelet[2509]: E0905 00:10:52.616706 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz" podUID="6dd17f3b-8a0c-44c8-8301-94b73eeeab5f" Sep 5 00:10:52.620081 containerd[1460]: time="2025-09-05T00:10:52.619543856Z" level=error msg="StopPodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" failed" error="failed to destroy network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.620154 kubelet[2509]: E0905 00:10:52.619766 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:10:52.620154 kubelet[2509]: E0905 00:10:52.619817 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281"} Sep 5 00:10:52.620154 kubelet[2509]: E0905 00:10:52.619857 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1fa28034-a693-41d8-9eae-06ce071ba306\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.620154 kubelet[2509]: E0905 00:10:52.619877 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1fa28034-a693-41d8-9eae-06ce071ba306\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fmz4m" podUID="1fa28034-a693-41d8-9eae-06ce071ba306" Sep 5 00:10:52.623045 containerd[1460]: time="2025-09-05T00:10:52.622991840Z" level=error msg="StopPodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" failed" error="failed to destroy network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.623174 kubelet[2509]: E0905 00:10:52.623144 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:10:52.623228 kubelet[2509]: E0905 00:10:52.623178 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471"} Sep 5 00:10:52.623228 kubelet[2509]: E0905 00:10:52.623208 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3931ff65-c111-474a-bf9a-aefaf26362d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:52.623312 kubelet[2509]: E0905 00:10:52.623236 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3931ff65-c111-474a-bf9a-aefaf26362d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j" podUID="3931ff65-c111-474a-bf9a-aefaf26362d5" Sep 5 00:10:52.657785 containerd[1460]: time="2025-09-05T00:10:52.657718758Z" level=error msg="Failed to destroy network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.658261 containerd[1460]: time="2025-09-05T00:10:52.658231461Z" level=error msg="encountered an error cleaning up failed sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.658326 containerd[1460]: time="2025-09-05T00:10:52.658296503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54j5k,Uid:d4aa0d59-f65d-4bd5-953e-3a3464571ba3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.658718 kubelet[2509]: E0905 00:10:52.658679 2509 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:52.658798 kubelet[2509]: E0905 00:10:52.658745 2509 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:52.658798 kubelet[2509]: E0905 00:10:52.658774 2509 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-54j5k" Sep 5 00:10:52.658999 kubelet[2509]: E0905 00:10:52.658828 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-54j5k_calico-system(d4aa0d59-f65d-4bd5-953e-3a3464571ba3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-54j5k_calico-system(d4aa0d59-f65d-4bd5-953e-3a3464571ba3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:52.661040 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29-shm.mount: Deactivated successfully. Sep 5 00:10:53.550217 kubelet[2509]: I0905 00:10:53.550158 2509 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:10:53.550819 containerd[1460]: time="2025-09-05T00:10:53.550780339Z" level=info msg="StopPodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\"" Sep 5 00:10:53.551471 containerd[1460]: time="2025-09-05T00:10:53.551445609Z" level=info msg="Ensure that sandbox 28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29 in task-service has been cleanup successfully" Sep 5 00:10:53.582073 containerd[1460]: time="2025-09-05T00:10:53.581994627Z" level=error msg="StopPodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" failed" error="failed to destroy network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:10:53.582326 kubelet[2509]: E0905 00:10:53.582275 2509 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:10:53.582393 kubelet[2509]: E0905 00:10:53.582340 2509 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29"} Sep 5 00:10:53.582393 kubelet[2509]: E0905 00:10:53.582381 2509 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:10:53.582599 kubelet[2509]: E0905 00:10:53.582415 2509 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4aa0d59-f65d-4bd5-953e-3a3464571ba3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-54j5k" podUID="d4aa0d59-f65d-4bd5-953e-3a3464571ba3" Sep 5 00:10:55.758741 systemd[1]: Started 
sshd@8-10.0.0.52:22-10.0.0.1:47810.service - OpenSSH per-connection server daemon (10.0.0.1:47810). Sep 5 00:10:55.799544 sshd[3757]: Accepted publickey for core from 10.0.0.1 port 47810 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:10:55.801508 sshd[3757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:10:55.806369 systemd-logind[1443]: New session 9 of user core. Sep 5 00:10:55.817636 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:10:55.962510 sshd[3757]: pam_unix(sshd:session): session closed for user core Sep 5 00:10:55.969287 systemd[1]: sshd@8-10.0.0.52:22-10.0.0.1:47810.service: Deactivated successfully. Sep 5 00:10:55.974616 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:10:55.975372 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:10:55.976562 systemd-logind[1443]: Removed session 9. Sep 5 00:10:57.456542 kubelet[2509]: I0905 00:10:57.455756 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:10:57.456542 kubelet[2509]: E0905 00:10:57.456200 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:10:57.560358 kubelet[2509]: E0905 00:10:57.560314 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:00.635238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273887119.mount: Deactivated successfully. Sep 5 00:11:00.973534 systemd[1]: Started sshd@9-10.0.0.52:22-10.0.0.1:37810.service - OpenSSH per-connection server daemon (10.0.0.1:37810). Sep 5 00:11:02.289454 sshd[3780]: Accepted publickey for core from 10.0.0.1 port 37810 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:02.291539 sshd[3780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:02.297086 systemd-logind[1443]: New session 10 of user core. Sep 5 00:11:02.302593 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:11:02.523809 sshd[3780]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:02.528389 systemd[1]: sshd@9-10.0.0.52:22-10.0.0.1:37810.service: Deactivated successfully. Sep 5 00:11:02.530677 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:11:02.531371 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:11:02.532323 systemd-logind[1443]: Removed session 10. 
Sep 5 00:11:02.589661 containerd[1460]: time="2025-09-05T00:11:02.589553295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:02.591970 containerd[1460]: time="2025-09-05T00:11:02.590648209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:11:02.592201 containerd[1460]: time="2025-09-05T00:11:02.592148865Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:02.645211 containerd[1460]: time="2025-09-05T00:11:02.645166079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:02.645747 containerd[1460]: time="2025-09-05T00:11:02.645715951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.137206355s" Sep 5 00:11:02.645792 containerd[1460]: time="2025-09-05T00:11:02.645747851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:11:02.656400 containerd[1460]: time="2025-09-05T00:11:02.656348913Z" level=info msg="CreateContainer within sandbox \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:11:02.684555 containerd[1460]: time="2025-09-05T00:11:02.684497662Z" level=info msg="CreateContainer within sandbox \"4df63a877e303c89c04e7f536d03814e2a6d94e20c732b43e7f99581c7e15b13\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e7ae54405d1be9cc6a41b65bd1058412f19b148ba74059441d9b12d46df269ac\"" Sep 5 00:11:02.685374 containerd[1460]: time="2025-09-05T00:11:02.685311049Z" level=info msg="StartContainer for \"e7ae54405d1be9cc6a41b65bd1058412f19b148ba74059441d9b12d46df269ac\"" Sep 5 00:11:02.739582 systemd[1]: Started cri-containerd-e7ae54405d1be9cc6a41b65bd1058412f19b148ba74059441d9b12d46df269ac.scope - libcontainer container e7ae54405d1be9cc6a41b65bd1058412f19b148ba74059441d9b12d46df269ac. Sep 5 00:11:02.775827 containerd[1460]: time="2025-09-05T00:11:02.775777503Z" level=info msg="StartContainer for \"e7ae54405d1be9cc6a41b65bd1058412f19b148ba74059441d9b12d46df269ac\" returns successfully" Sep 5 00:11:02.884563 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:11:02.884754 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
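Editor's note: the pull-then-start sequence above (PullImage, CreateContainer, StartContainer) is driven by the kubelet over CRI, but it has a compact standalone equivalent in containerd's Go client. A hedged sketch, assuming containerd 1.x at the default socket and the "k8s.io" namespace used by CRI in these logs; the container and snapshot IDs here are illustrative stand-ins for the hashes in the log:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	// CRI-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack, as in the "Pulled image ... in 11.137206355s" entry.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: build a container from the image's own OCI config.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create the task, then start it; systemd then reports
	// the matching cri-containerd-<id>.scope as started.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```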
Sep 5 00:11:02.986664 containerd[1460]: time="2025-09-05T00:11:02.986622209Z" level=info msg="StopPodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\"" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.049 [INFO][3865] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.050 [INFO][3865] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" iface="eth0" netns="/var/run/netns/cni-1b8fc807-2130-c878-07c4-f03efb201ffd" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.051 [INFO][3865] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" iface="eth0" netns="/var/run/netns/cni-1b8fc807-2130-c878-07c4-f03efb201ffd" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.054 [INFO][3865] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" iface="eth0" netns="/var/run/netns/cni-1b8fc807-2130-c878-07c4-f03efb201ffd" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.054 [INFO][3865] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.054 [INFO][3865] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.121 [INFO][3874] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.121 [INFO][3874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.122 [INFO][3874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.535 [WARNING][3874] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.536 [INFO][3874] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.538 [INFO][3874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:03.545086 containerd[1460]: 2025-09-05 00:11:03.542 [INFO][3865] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:03.545649 containerd[1460]: time="2025-09-05T00:11:03.545259418Z" level=info msg="TearDown network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" successfully" Sep 5 00:11:03.545649 containerd[1460]: time="2025-09-05T00:11:03.545289825Z" level=info msg="StopPodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" returns successfully" Sep 5 00:11:03.564237 containerd[1460]: time="2025-09-05T00:11:03.564181271Z" level=info msg="StopPodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\"" Sep 5 00:11:03.577799 kubelet[2509]: I0905 00:11:03.577727 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swpqx\" (UniqueName: \"kubernetes.io/projected/cb54d995-57fe-449c-b086-a027be7852e5-kube-api-access-swpqx\") pod \"cb54d995-57fe-449c-b086-a027be7852e5\" (UID: \"cb54d995-57fe-449c-b086-a027be7852e5\") " Sep 5 00:11:03.579838 kubelet[2509]: I0905 00:11:03.577901 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb54d995-57fe-449c-b086-a027be7852e5-whisker-ca-bundle\") pod \"cb54d995-57fe-449c-b086-a027be7852e5\" (UID: \"cb54d995-57fe-449c-b086-a027be7852e5\") " Sep 5 00:11:03.579838 kubelet[2509]: I0905 00:11:03.578098 2509 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb54d995-57fe-449c-b086-a027be7852e5-whisker-backend-key-pair\") pod \"cb54d995-57fe-449c-b086-a027be7852e5\" (UID: \"cb54d995-57fe-449c-b086-a027be7852e5\") " Sep 5 00:11:03.584066 kubelet[2509]: I0905 00:11:03.583540 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb54d995-57fe-449c-b086-a027be7852e5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cb54d995-57fe-449c-b086-a027be7852e5" (UID: "cb54d995-57fe-449c-b086-a027be7852e5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:11:03.595629 kubelet[2509]: I0905 00:11:03.594930 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb54d995-57fe-449c-b086-a027be7852e5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cb54d995-57fe-449c-b086-a027be7852e5" (UID: "cb54d995-57fe-449c-b086-a027be7852e5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:11:03.595629 kubelet[2509]: I0905 00:11:03.595172 2509 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb54d995-57fe-449c-b086-a027be7852e5-kube-api-access-swpqx" (OuterVolumeSpecName: "kube-api-access-swpqx") pod "cb54d995-57fe-449c-b086-a027be7852e5" (UID: "cb54d995-57fe-449c-b086-a027be7852e5"). InnerVolumeSpecName "kube-api-access-swpqx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:11:03.611039 kubelet[2509]: I0905 00:11:03.610691 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8b82m" podStartSLOduration=2.219040002 podStartE2EDuration="34.610673256s" podCreationTimestamp="2025-09-05 00:10:29 +0000 UTC" firstStartedPulling="2025-09-05 00:10:30.254656182 +0000 UTC m=+18.788735760" lastFinishedPulling="2025-09-05 00:11:02.646289436 +0000 UTC m=+51.180369014" observedRunningTime="2025-09-05 00:11:03.608285797 +0000 UTC m=+52.142365375" watchObservedRunningTime="2025-09-05 00:11:03.610673256 +0000 UTC m=+52.144752824" Sep 5 00:11:03.651910 systemd[1]: run-netns-cni\x2d1b8fc807\x2d2130\x2dc878\x2d07c4\x2df03efb201ffd.mount: Deactivated successfully. Sep 5 00:11:03.652052 systemd[1]: var-lib-kubelet-pods-cb54d995\x2d57fe\x2d449c\x2db086\x2da027be7852e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswpqx.mount: Deactivated successfully. Sep 5 00:11:03.652142 systemd[1]: var-lib-kubelet-pods-cb54d995\x2d57fe\x2d449c\x2db086\x2da027be7852e5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.623 [INFO][3892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.624 [INFO][3892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" iface="eth0" netns="/var/run/netns/cni-33985134-16b4-75ad-e262-0e451bc98851" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.624 [INFO][3892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" iface="eth0" netns="/var/run/netns/cni-33985134-16b4-75ad-e262-0e451bc98851" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.624 [INFO][3892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" iface="eth0" netns="/var/run/netns/cni-33985134-16b4-75ad-e262-0e451bc98851" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.624 [INFO][3892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.624 [INFO][3892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.649 [INFO][3907] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.649 [INFO][3907] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.649 [INFO][3907] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.657 [WARNING][3907] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.657 [INFO][3907] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.662 [INFO][3907] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:03.670532 containerd[1460]: 2025-09-05 00:11:03.665 [INFO][3892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:03.671224 containerd[1460]: time="2025-09-05T00:11:03.670919469Z" level=info msg="TearDown network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" successfully" Sep 5 00:11:03.672445 containerd[1460]: time="2025-09-05T00:11:03.671539132Z" level=info msg="StopPodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" returns successfully" Sep 5 00:11:03.674042 containerd[1460]: time="2025-09-05T00:11:03.674006501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-wdp6j,Uid:3931ff65-c111-474a-bf9a-aefaf26362d5,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:11:03.674807 systemd[1]: run-netns-cni\x2d33985134\x2d16b4\x2d75ad\x2de262\x2d0e451bc98851.mount: Deactivated successfully. Sep 5 00:11:03.679241 kubelet[2509]: I0905 00:11:03.679210 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cb54d995-57fe-449c-b086-a027be7852e5-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:03.679241 kubelet[2509]: I0905 00:11:03.679238 2509 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-swpqx\" (UniqueName: \"kubernetes.io/projected/cb54d995-57fe-449c-b086-a027be7852e5-kube-api-access-swpqx\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:03.679241 kubelet[2509]: I0905 00:11:03.679247 2509 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb54d995-57fe-449c-b086-a027be7852e5-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:11:03.883611 systemd[1]: Removed slice kubepods-besteffort-podcb54d995_57fe_449c_b086_a027be7852e5.slice - libcontainer container kubepods-besteffort-podcb54d995_57fe_449c_b086_a027be7852e5.slice. Sep 5 00:11:03.970508 systemd[1]: Created slice kubepods-besteffort-pod551351c8_20f5_49ed_adf6_5a0fdefea7e4.slice - libcontainer container kubepods-besteffort-pod551351c8_20f5_49ed_adf6_5a0fdefea7e4.slice. 
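Editor's note: the odd unit names in the mount and slice entries above (kubernetes.io\x7eprojected, kube\x2dapi\x2daccess\x2dswpqx) are systemd's path escaping: '/' becomes '-', and any byte outside a small safe set is written as \xXX, so '-' itself becomes \x2d and '~' becomes \x7e. A close approximation of `systemd-escape --path` (illustrative; the real rules also special-case leading dots and empty paths):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip the leading
// slash, map '/' to '-', keep ASCII alphanumerics plus '_' and '.',
// and hex-escape everything else as \xXX.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/cb54d995-57fe-449c-b086-a027be7852e5/volumes/kubernetes.io~projected/kube-api-access-swpqx"
	// Prints the same name systemd logged for the deactivated .mount unit:
	// var-lib-kubelet-pods-cb54d995\x2d57fe\x2d...-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswpqx.mount
	fmt.Println(escapePath(p) + ".mount")
}
```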
Sep 5 00:11:03.980806 kubelet[2509]: I0905 00:11:03.980759 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfk2\" (UniqueName: \"kubernetes.io/projected/551351c8-20f5-49ed-adf6-5a0fdefea7e4-kube-api-access-btfk2\") pod \"whisker-7f4fc74d67-xp58z\" (UID: \"551351c8-20f5-49ed-adf6-5a0fdefea7e4\") " pod="calico-system/whisker-7f4fc74d67-xp58z" Sep 5 00:11:03.980973 kubelet[2509]: I0905 00:11:03.980845 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/551351c8-20f5-49ed-adf6-5a0fdefea7e4-whisker-ca-bundle\") pod \"whisker-7f4fc74d67-xp58z\" (UID: \"551351c8-20f5-49ed-adf6-5a0fdefea7e4\") " pod="calico-system/whisker-7f4fc74d67-xp58z" Sep 5 00:11:03.980973 kubelet[2509]: I0905 00:11:03.980896 2509 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/551351c8-20f5-49ed-adf6-5a0fdefea7e4-whisker-backend-key-pair\") pod \"whisker-7f4fc74d67-xp58z\" (UID: \"551351c8-20f5-49ed-adf6-5a0fdefea7e4\") " pod="calico-system/whisker-7f4fc74d67-xp58z" Sep 5 00:11:04.035511 systemd-networkd[1389]: calib2ac6dfad8c: Link UP Sep 5 00:11:04.035744 systemd-networkd[1389]: calib2ac6dfad8c: Gained carrier Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.955 [INFO][3944] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.968 [INFO][3944] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0 calico-apiserver-56f98cb9dd- calico-apiserver 3931ff65-c111-474a-bf9a-aefaf26362d5 1044 0 2025-09-05 00:10:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f98cb9dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56f98cb9dd-wdp6j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2ac6dfad8c [] [] }} ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.968 [INFO][3944] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.994 [INFO][3958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" HandleID="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.994 [INFO][3958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" 
HandleID="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56f98cb9dd-wdp6j", "timestamp":"2025-09-05 00:11:03.994317796 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.994 [INFO][3958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.994 [INFO][3958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:03.994 [INFO][3958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.000 [INFO][3958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.007 [INFO][3958] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.010 [INFO][3958] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.012 [INFO][3958] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.014 [INFO][3958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.014 [INFO][3958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.015 [INFO][3958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46 Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.018 [INFO][3958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.022 [INFO][3958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.022 [INFO][3958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" host="localhost" Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.022 [INFO][3958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:04.051653 containerd[1460]: 2025-09-05 00:11:04.022 [INFO][3958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" HandleID="k8s-pod-network.466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.052326 containerd[1460]: 2025-09-05 00:11:04.025 [INFO][3944] cni-plugin/k8s.go 418: Populated endpoint ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3931ff65-c111-474a-bf9a-aefaf26362d5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56f98cb9dd-wdp6j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ac6dfad8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:04.052326 containerd[1460]: 2025-09-05 00:11:04.026 [INFO][3944] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.052326 containerd[1460]: 2025-09-05 00:11:04.026 [INFO][3944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2ac6dfad8c ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.052326 containerd[1460]: 2025-09-05 00:11:04.034 [INFO][3944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.052326 containerd[1460]: 2025-09-05 00:11:04.035 [INFO][3944] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3931ff65-c111-474a-bf9a-aefaf26362d5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46", Pod:"calico-apiserver-56f98cb9dd-wdp6j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ac6dfad8c", MAC:"0e:53:a0:f0:e4:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:04.052326 containerd[1460]: 2025-09-05 00:11:04.048 [INFO][3944] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-wdp6j" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:04.092071 containerd[1460]: time="2025-09-05T00:11:04.091626002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:04.092071 containerd[1460]: time="2025-09-05T00:11:04.091770854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:04.092071 containerd[1460]: time="2025-09-05T00:11:04.091784800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:04.092071 containerd[1460]: time="2025-09-05T00:11:04.091914503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:04.111569 systemd[1]: Started cri-containerd-466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46.scope - libcontainer container 466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46. 
Sep 5 00:11:04.123318 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:04.149291 containerd[1460]: time="2025-09-05T00:11:04.149169439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-wdp6j,Uid:3931ff65-c111-474a-bf9a-aefaf26362d5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46\"" Sep 5 00:11:04.151788 containerd[1460]: time="2025-09-05T00:11:04.151715296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:11:04.275695 containerd[1460]: time="2025-09-05T00:11:04.275644329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f4fc74d67-xp58z,Uid:551351c8-20f5-49ed-adf6-5a0fdefea7e4,Namespace:calico-system,Attempt:0,}" Sep 5 00:11:04.564869 containerd[1460]: time="2025-09-05T00:11:04.564823595Z" level=info msg="StopPodSandbox for \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\"" Sep 5 00:11:04.613701 systemd-networkd[1389]: califfbe89b98b9: Link UP Sep 5 00:11:04.614041 systemd-networkd[1389]: califfbe89b98b9: Gained carrier Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.309 [INFO][4015] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.319 [INFO][4015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7f4fc74d67--xp58z-eth0 whisker-7f4fc74d67- calico-system 551351c8-20f5-49ed-adf6-5a0fdefea7e4 1060 0 2025-09-05 00:11:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f4fc74d67 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7f4fc74d67-xp58z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califfbe89b98b9 [] [] }} ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.319 [INFO][4015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.344 [INFO][4030] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" HandleID="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Workload="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.344 [INFO][4030] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" HandleID="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Workload="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7f4fc74d67-xp58z", "timestamp":"2025-09-05 00:11:04.344208703 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.344 [INFO][4030] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.344 [INFO][4030] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.344 [INFO][4030] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.351 [INFO][4030] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.356 [INFO][4030] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.359 [INFO][4030] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.360 [INFO][4030] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.362 [INFO][4030] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.362 [INFO][4030] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.364 [INFO][4030] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535 Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.592 [INFO][4030] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.598 [INFO][4030] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.598 [INFO][4030] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" host="localhost" Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.598 [INFO][4030] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:04.633461 containerd[1460]: 2025-09-05 00:11:04.598 [INFO][4030] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" HandleID="k8s-pod-network.a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Workload="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.634088 containerd[1460]: 2025-09-05 00:11:04.603 [INFO][4015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f4fc74d67--xp58z-eth0", GenerateName:"whisker-7f4fc74d67-", Namespace:"calico-system", SelfLink:"", UID:"551351c8-20f5-49ed-adf6-5a0fdefea7e4", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f4fc74d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7f4fc74d67-xp58z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califfbe89b98b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:04.634088 containerd[1460]: 2025-09-05 00:11:04.603 [INFO][4015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.634088 containerd[1460]: 2025-09-05 00:11:04.603 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfbe89b98b9 ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.634088 containerd[1460]: 2025-09-05 00:11:04.614 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.634088 containerd[1460]: 2025-09-05 00:11:04.615 [INFO][4015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7f4fc74d67--xp58z-eth0", GenerateName:"whisker-7f4fc74d67-", Namespace:"calico-system", SelfLink:"", UID:"551351c8-20f5-49ed-adf6-5a0fdefea7e4", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 11, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f4fc74d67", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535", Pod:"whisker-7f4fc74d67-xp58z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califfbe89b98b9", MAC:"62:92:24:fc:e2:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:04.634088 containerd[1460]: 2025-09-05 00:11:04.627 [INFO][4015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535" Namespace="calico-system" Pod="whisker-7f4fc74d67-xp58z" WorkloadEndpoint="localhost-k8s-whisker--7f4fc74d67--xp58z-eth0" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.616 [INFO][4049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.616 [INFO][4049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" iface="eth0" netns="/var/run/netns/cni-11c64426-0093-17ec-39a8-30918218e4ef" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.617 [INFO][4049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" iface="eth0" netns="/var/run/netns/cni-11c64426-0093-17ec-39a8-30918218e4ef" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.618 [INFO][4049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" iface="eth0" netns="/var/run/netns/cni-11c64426-0093-17ec-39a8-30918218e4ef" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.618 [INFO][4049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.618 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.648 [INFO][4078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.648 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.648 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.655 [WARNING][4078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.655 [INFO][4078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.656 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:04.671599 containerd[1460]: 2025-09-05 00:11:04.665 [INFO][4049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:04.676820 containerd[1460]: time="2025-09-05T00:11:04.676031831Z" level=info msg="TearDown network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" successfully" Sep 5 00:11:04.676820 containerd[1460]: time="2025-09-05T00:11:04.676510789Z" level=info msg="StopPodSandbox for \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" returns successfully" Sep 5 00:11:04.677214 systemd[1]: run-netns-cni\x2d11c64426\x2d0093\x2d17ec\x2d39a8\x2d30918218e4ef.mount: Deactivated successfully. Sep 5 00:11:04.679109 containerd[1460]: time="2025-09-05T00:11:04.678851410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-dggrz,Uid:6dd17f3b-8a0c-44c8-8301-94b73eeeab5f,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:11:04.691999 containerd[1460]: time="2025-09-05T00:11:04.691485726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:04.691999 containerd[1460]: time="2025-09-05T00:11:04.691629426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:04.691999 containerd[1460]: time="2025-09-05T00:11:04.691667568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:04.691999 containerd[1460]: time="2025-09-05T00:11:04.691837015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:04.745763 systemd[1]: Started cri-containerd-a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535.scope - libcontainer container a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535. Sep 5 00:11:04.793577 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:04.837045 containerd[1460]: time="2025-09-05T00:11:04.835659344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f4fc74d67-xp58z,Uid:551351c8-20f5-49ed-adf6-5a0fdefea7e4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535\"" Sep 5 00:11:05.153491 kernel: bpftool[4270]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:11:05.218384 systemd-networkd[1389]: cali2a91e43ea7d: Link UP Sep 5 00:11:05.219703 systemd-networkd[1389]: cali2a91e43ea7d: Gained carrier Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.776 [INFO][4148] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.787 [INFO][4148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0 calico-apiserver-56f98cb9dd- calico-apiserver 6dd17f3b-8a0c-44c8-8301-94b73eeeab5f 1072 0 2025-09-05 00:10:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56f98cb9dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56f98cb9dd-dggrz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2a91e43ea7d [] [] }} ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.787 [INFO][4148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.856 [INFO][4198] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" HandleID="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.856 [INFO][4198] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" 
HandleID="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043d9d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56f98cb9dd-dggrz", "timestamp":"2025-09-05 00:11:04.856184394 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.856 [INFO][4198] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.856 [INFO][4198] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:04.856 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.083 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.108 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.113 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.115 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.117 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.117 [INFO][4198] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.118 [INFO][4198] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045 Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.130 [INFO][4198] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.208 [INFO][4198] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.208 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" host="localhost" Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.208 [INFO][4198] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:05.253392 containerd[1460]: 2025-09-05 00:11:05.208 [INFO][4198] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" HandleID="k8s-pod-network.60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.254077 containerd[1460]: 2025-09-05 00:11:05.213 [INFO][4148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56f98cb9dd-dggrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a91e43ea7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:05.254077 containerd[1460]: 2025-09-05 00:11:05.213 [INFO][4148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.254077 containerd[1460]: 2025-09-05 00:11:05.213 [INFO][4148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a91e43ea7d ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.254077 containerd[1460]: 2025-09-05 00:11:05.221 [INFO][4148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.254077 containerd[1460]: 2025-09-05 00:11:05.237 [INFO][4148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045", Pod:"calico-apiserver-56f98cb9dd-dggrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a91e43ea7d", MAC:"ce:1d:66:65:82:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:05.254077 containerd[1460]: 2025-09-05 00:11:05.249 [INFO][4148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045" Namespace="calico-apiserver" Pod="calico-apiserver-56f98cb9dd-dggrz" WorkloadEndpoint="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:05.274135 containerd[1460]: time="2025-09-05T00:11:05.273211103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:05.274135 containerd[1460]: time="2025-09-05T00:11:05.273938288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:05.274135 containerd[1460]: time="2025-09-05T00:11:05.273952074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:05.274135 containerd[1460]: time="2025-09-05T00:11:05.274043014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:05.298610 systemd[1]: Started cri-containerd-60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045.scope - libcontainer container 60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045. 
Sep 5 00:11:05.320574 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:05.352703 systemd-networkd[1389]: calib2ac6dfad8c: Gained IPv6LL Sep 5 00:11:05.362531 containerd[1460]: time="2025-09-05T00:11:05.362418152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56f98cb9dd-dggrz,Uid:6dd17f3b-8a0c-44c8-8301-94b73eeeab5f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045\"" Sep 5 00:11:05.456792 systemd-networkd[1389]: vxlan.calico: Link UP Sep 5 00:11:05.456802 systemd-networkd[1389]: vxlan.calico: Gained carrier Sep 5 00:11:05.565486 containerd[1460]: time="2025-09-05T00:11:05.564814325Z" level=info msg="StopPodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\"" Sep 5 00:11:05.565486 containerd[1460]: time="2025-09-05T00:11:05.564923169Z" level=info msg="StopPodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\"" Sep 5 00:11:05.569809 kubelet[2509]: I0905 00:11:05.569768 2509 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb54d995-57fe-449c-b086-a027be7852e5" path="/var/lib/kubelet/pods/cb54d995-57fe-449c-b086-a027be7852e5/volumes" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.705 [INFO][4394] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.705 [INFO][4394] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" iface="eth0" netns="/var/run/netns/cni-d70d376b-4907-7b2c-b750-bf22df25ec48" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.706 [INFO][4394] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" iface="eth0" netns="/var/run/netns/cni-d70d376b-4907-7b2c-b750-bf22df25ec48" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.706 [INFO][4394] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" iface="eth0" netns="/var/run/netns/cni-d70d376b-4907-7b2c-b750-bf22df25ec48" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.706 [INFO][4394] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.706 [INFO][4394] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.794 [INFO][4433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.794 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.794 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.865 [WARNING][4433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.865 [INFO][4433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.867 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:05.875747 containerd[1460]: 2025-09-05 00:11:05.871 [INFO][4394] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:05.878700 containerd[1460]: time="2025-09-05T00:11:05.878576320Z" level=info msg="TearDown network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" successfully" Sep 5 00:11:05.878700 containerd[1460]: time="2025-09-05T00:11:05.878611506Z" level=info msg="StopPodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" returns successfully" Sep 5 00:11:05.879234 systemd[1]: run-netns-cni\x2dd70d376b\x2d4907\x2d7b2c\x2db750\x2dbf22df25ec48.mount: Deactivated successfully. Sep 5 00:11:05.879628 containerd[1460]: time="2025-09-05T00:11:05.879531351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb59d4ff4-m7trx,Uid:44b99312-546c-47bf-b6a7-75de1f36f388,Namespace:calico-system,Attempt:1,}" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.709 [INFO][4393] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.711 [INFO][4393] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" iface="eth0" netns="/var/run/netns/cni-658a1666-552d-0ad3-579d-9f8da5a3a732" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.711 [INFO][4393] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" iface="eth0" netns="/var/run/netns/cni-658a1666-552d-0ad3-579d-9f8da5a3a732" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.715 [INFO][4393] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" iface="eth0" netns="/var/run/netns/cni-658a1666-552d-0ad3-579d-9f8da5a3a732" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.715 [INFO][4393] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.715 [INFO][4393] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.796 [INFO][4439] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.798 [INFO][4439] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.867 [INFO][4439] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.878 [WARNING][4439] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.878 [INFO][4439] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.881 [INFO][4439] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:05.890036 containerd[1460]: 2025-09-05 00:11:05.886 [INFO][4393] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:05.890724 containerd[1460]: time="2025-09-05T00:11:05.890569441Z" level=info msg="TearDown network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" successfully" Sep 5 00:11:05.890724 containerd[1460]: time="2025-09-05T00:11:05.890604096Z" level=info msg="StopPodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" returns successfully" Sep 5 00:11:05.892585 containerd[1460]: time="2025-09-05T00:11:05.892413170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-477dw,Uid:b497f3e8-1406-4da9-8e71-b2f813307a42,Namespace:calico-system,Attempt:1,}" Sep 5 00:11:05.893727 systemd[1]: run-netns-cni\x2d658a1666\x2d552d\x2d0ad3\x2d579d\x2d9f8da5a3a732.mount: Deactivated successfully. 
Sep 5 00:11:06.046556 systemd-networkd[1389]: calia3f67cbff20: Link UP Sep 5 00:11:06.046760 systemd-networkd[1389]: calia3f67cbff20: Gained carrier Sep 5 00:11:06.058156 systemd-networkd[1389]: califfbe89b98b9: Gained IPv6LL Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.947 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--477dw-eth0 goldmane-54d579b49d- calico-system b497f3e8-1406-4da9-8e71-b2f813307a42 1091 0 2025-09-05 00:10:29 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-477dw eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia3f67cbff20 [] [] }} ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.947 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.975 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" HandleID="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.976 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" HandleID="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ed0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-477dw", "timestamp":"2025-09-05 00:11:05.975812442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.976 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.976 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.976 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.982 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.989 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.993 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.995 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.996 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.996 [INFO][4524] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:05.998 [INFO][4524] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777 Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:06.033 [INFO][4524] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:06.038 [INFO][4524] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:06.039 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" host="localhost" Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:06.039 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
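Annotation: the allocation walk above is the IPAM fast path: confirm this host's affinity for block 192.168.88.128/26, load the block, then claim the next free address (.132 for goldmane, after .131 went to the apiserver pod). A /26 covers .128 through .191, 64 addresses, which is why every pod IP in this section lands in the same affine block; the containment is easy to check with Go's standard library:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The block the IPAM log keeps loading for host "localhost".
        block := netip.MustParsePrefix("192.168.88.128/26") // .128 to .191

        for _, s := range []string{
            "192.168.88.131", "192.168.88.132",
            "192.168.88.133", "192.168.88.134",
        } {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
        }
    }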
Sep 5 00:11:06.065381 containerd[1460]: 2025-09-05 00:11:06.039 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" HandleID="k8s-pod-network.e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.066958 containerd[1460]: 2025-09-05 00:11:06.043 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--477dw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b497f3e8-1406-4da9-8e71-b2f813307a42", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-477dw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3f67cbff20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:06.066958 containerd[1460]: 2025-09-05 00:11:06.043 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.066958 containerd[1460]: 2025-09-05 00:11:06.043 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3f67cbff20 ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.066958 containerd[1460]: 2025-09-05 00:11:06.046 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.066958 containerd[1460]: 2025-09-05 00:11:06.046 [INFO][4501] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--477dw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b497f3e8-1406-4da9-8e71-b2f813307a42", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777", Pod:"goldmane-54d579b49d-477dw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3f67cbff20", MAC:"6e:e1:de:02:6e:22", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:06.066958 containerd[1460]: 2025-09-05 00:11:06.057 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777" Namespace="calico-system" Pod="goldmane-54d579b49d-477dw" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:06.087128 containerd[1460]: time="2025-09-05T00:11:06.086350416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:06.087128 containerd[1460]: time="2025-09-05T00:11:06.086409146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:06.087128 containerd[1460]: time="2025-09-05T00:11:06.086420116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:06.087128 containerd[1460]: time="2025-09-05T00:11:06.086525654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:06.109661 systemd[1]: Started cri-containerd-e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777.scope - libcontainer container e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777. 
Sep 5 00:11:06.117618 systemd-networkd[1389]: cali0776b1d81a4: Link UP Sep 5 00:11:06.118990 systemd-networkd[1389]: cali0776b1d81a4: Gained carrier Sep 5 00:11:06.127062 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:05.941 [INFO][4490] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0 calico-kube-controllers-5fb59d4ff4- calico-system 44b99312-546c-47bf-b6a7-75de1f36f388 1090 0 2025-09-05 00:10:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fb59d4ff4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5fb59d4ff4-m7trx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0776b1d81a4 [] [] }} ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:05.942 [INFO][4490] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:05.978 [INFO][4518] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" HandleID="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:05.978 [INFO][4518] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" HandleID="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5fb59d4ff4-m7trx", "timestamp":"2025-09-05 00:11:05.97810817 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:05.978 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.039 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.039 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.083 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.090 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.094 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.096 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.098 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.098 [INFO][4518] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.099 [INFO][4518] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7 Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.103 [INFO][4518] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.108 [INFO][4518] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.108 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" host="localhost" Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.109 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
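Annotation: every IPAM transaction in this section brackets its block reads and writes between "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", and records each assignment under a handle, k8s-pod-network.<containerID>, so that a later CNI DEL can release by handle alone. The toy allocator below mirrors that bookkeeping with a mutex and a handle map; it sketches the shape of the flow, not Calico's datastore logic, and the starting address is set by hand to match the log:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    type blockIPAM struct {
        mu      sync.Mutex // stands in for the "host-wide IPAM lock"
        block   netip.Prefix
        next    netip.Addr
        handles map[string]netip.Addr
    }

    func (b *blockIPAM) Assign(handle string) (netip.Addr, bool) {
        b.mu.Lock() // "About to acquire host-wide IPAM lock."
        defer b.mu.Unlock()
        if !b.block.Contains(b.next) {
            return netip.Addr{}, false // block exhausted
        }
        a := b.next
        b.next = a.Next()
        b.handles[handle] = a // "Writing block in order to claim IPs"
        return a, true
    }

    func (b *blockIPAM) Release(handle string) {
        b.mu.Lock()
        defer b.mu.Unlock()
        delete(b.handles, handle) // "Releasing address using handleID"
    }

    func main() {
        ipam := &blockIPAM{
            block:   netip.MustParsePrefix("192.168.88.128/26"),
            next:    netip.MustParseAddr("192.168.88.133"), // earlier IPs already claimed
            handles: map[string]netip.Addr{},
        }
        ip, _ := ipam.Assign("k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7")
        fmt.Println("claimed:", ip) // 192.168.88.133, matching the log
    }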
Sep 5 00:11:06.137014 containerd[1460]: 2025-09-05 00:11:06.109 [INFO][4518] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" HandleID="k8s-pod-network.74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.137891 containerd[1460]: 2025-09-05 00:11:06.113 [INFO][4490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0", GenerateName:"calico-kube-controllers-5fb59d4ff4-", Namespace:"calico-system", SelfLink:"", UID:"44b99312-546c-47bf-b6a7-75de1f36f388", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb59d4ff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5fb59d4ff4-m7trx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0776b1d81a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:06.137891 containerd[1460]: 2025-09-05 00:11:06.113 [INFO][4490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.137891 containerd[1460]: 2025-09-05 00:11:06.113 [INFO][4490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0776b1d81a4 ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.137891 containerd[1460]: 2025-09-05 00:11:06.119 [INFO][4490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.137891 containerd[1460]: 2025-09-05 00:11:06.120 [INFO][4490] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0", GenerateName:"calico-kube-controllers-5fb59d4ff4-", Namespace:"calico-system", SelfLink:"", UID:"44b99312-546c-47bf-b6a7-75de1f36f388", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb59d4ff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7", Pod:"calico-kube-controllers-5fb59d4ff4-m7trx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0776b1d81a4", MAC:"02:e8:88:5d:48:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:06.137891 containerd[1460]: 2025-09-05 00:11:06.132 [INFO][4490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7" Namespace="calico-system" Pod="calico-kube-controllers-5fb59d4ff4-m7trx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:06.187951 containerd[1460]: time="2025-09-05T00:11:06.187390878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:06.187951 containerd[1460]: time="2025-09-05T00:11:06.187510463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:06.187951 containerd[1460]: time="2025-09-05T00:11:06.187522276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:06.187951 containerd[1460]: time="2025-09-05T00:11:06.187838559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:06.191805 containerd[1460]: time="2025-09-05T00:11:06.191746821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-477dw,Uid:b497f3e8-1406-4da9-8e71-b2f813307a42,Namespace:calico-system,Attempt:1,} returns sandbox id \"e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777\"" Sep 5 00:11:06.219990 systemd[1]: Started cri-containerd-74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7.scope - libcontainer container 74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7. Sep 5 00:11:06.240499 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:06.270787 containerd[1460]: time="2025-09-05T00:11:06.270740902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb59d4ff4-m7trx,Uid:44b99312-546c-47bf-b6a7-75de1f36f388,Namespace:calico-system,Attempt:1,} returns sandbox id \"74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7\"" Sep 5 00:11:06.564713 containerd[1460]: time="2025-09-05T00:11:06.564641923Z" level=info msg="StopPodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\"" Sep 5 00:11:06.568770 systemd-networkd[1389]: cali2a91e43ea7d: Gained IPv6LL Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.634 [INFO][4648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.635 [INFO][4648] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" iface="eth0" netns="/var/run/netns/cni-cded3ddb-6338-f7ab-07af-5e10514b2e34" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.635 [INFO][4648] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" iface="eth0" netns="/var/run/netns/cni-cded3ddb-6338-f7ab-07af-5e10514b2e34" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.635 [INFO][4648] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" iface="eth0" netns="/var/run/netns/cni-cded3ddb-6338-f7ab-07af-5e10514b2e34" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.635 [INFO][4648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.635 [INFO][4648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.668 [INFO][4656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.668 [INFO][4656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.668 [INFO][4656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.673 [WARNING][4656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.674 [INFO][4656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.676 [INFO][4656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:06.683313 containerd[1460]: 2025-09-05 00:11:06.679 [INFO][4648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:06.684133 containerd[1460]: time="2025-09-05T00:11:06.683572119Z" level=info msg="TearDown network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" successfully" Sep 5 00:11:06.684133 containerd[1460]: time="2025-09-05T00:11:06.683613436Z" level=info msg="StopPodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" returns successfully" Sep 5 00:11:06.684202 kubelet[2509]: E0905 00:11:06.684074 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:06.685470 containerd[1460]: time="2025-09-05T00:11:06.685416319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fmz4m,Uid:1fa28034-a693-41d8-9eae-06ce071ba306,Namespace:kube-system,Attempt:1,}" Sep 5 00:11:06.686584 systemd[1]: run-netns-cni\x2dcded3ddb\x2d6338\x2df7ab\x2d07af\x2d5e10514b2e34.mount: Deactivated successfully. 
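Annotation: the kubelet error in this stretch ("Nameserver limits were exceeded... 1.1.1.1 1.0.0.1 8.8.8.8") is the standard warning emitted when the node's resolv.conf lists more nameservers than the three a pod's resolv.conf can carry; kubelet keeps the first three and drops the rest, so resolution still works. A quick way to count what a node actually has, parsing resolv.conf the straightforward way (a sketch, not kubelet's exact parser):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        // kubelet warns when more than 3 are present and uses only the first three.
        fmt.Println(len(servers), servers)
    }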
Sep 5 00:11:06.760711 systemd-networkd[1389]: vxlan.calico: Gained IPv6LL Sep 5 00:11:07.034992 systemd-networkd[1389]: cali57521ff7f58: Link UP Sep 5 00:11:07.035234 systemd-networkd[1389]: cali57521ff7f58: Gained carrier Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.964 [INFO][4670] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0 coredns-674b8bbfcf- kube-system 1fa28034-a693-41d8-9eae-06ce071ba306 1105 0 2025-09-05 00:10:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fmz4m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali57521ff7f58 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.965 [INFO][4670] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.993 [INFO][4686] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" HandleID="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.993 [INFO][4686] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" HandleID="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fmz4m", "timestamp":"2025-09-05 00:11:06.993563226 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.993 [INFO][4686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.994 [INFO][4686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:06.994 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.002 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.005 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.009 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.011 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.012 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.012 [INFO][4686] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.013 [INFO][4686] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93 Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.020 [INFO][4686] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.027 [INFO][4686] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.027 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" host="localhost" Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.027 [INFO][4686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
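Annotation: the vxlan.calico device that gained IPv6LL just above is the overlay interface this pod traffic rides between nodes. As a hedged sketch of how such a device comes to exist, the snippet below creates a VXLAN link through the vishvananda/netlink package (which Calico's Linux dataplane also builds on); VNI 4096 and UDP port 4789 are Calico's documented defaults, not values read from this host, and real deployments additionally set a parent device and source address:

    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        link := &netlink.Vxlan{
            LinkAttrs: netlink.LinkAttrs{Name: "vxlan.calico"},
            VxlanId:   4096, // Calico's default IPv4 VNI (assumed here)
            Port:      4789, // IANA-assigned VXLAN port
        }
        if err := netlink.LinkAdd(link); err != nil {
            fmt.Println("LinkAdd:", err)
            return
        }
        if err := netlink.LinkSetUp(link); err != nil {
            fmt.Println("LinkSetUp:", err)
        }
    }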
Sep 5 00:11:07.050572 containerd[1460]: 2025-09-05 00:11:07.027 [INFO][4686] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" HandleID="k8s-pod-network.9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.051727 containerd[1460]: 2025-09-05 00:11:07.032 [INFO][4670] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1fa28034-a693-41d8-9eae-06ce071ba306", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fmz4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57521ff7f58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:07.051727 containerd[1460]: 2025-09-05 00:11:07.032 [INFO][4670] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.051727 containerd[1460]: 2025-09-05 00:11:07.032 [INFO][4670] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57521ff7f58 ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.051727 containerd[1460]: 2025-09-05 00:11:07.034 [INFO][4670] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.051727 
containerd[1460]: 2025-09-05 00:11:07.035 [INFO][4670] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1fa28034-a693-41d8-9eae-06ce071ba306", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93", Pod:"coredns-674b8bbfcf-fmz4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57521ff7f58", MAC:"ae:46:c4:c4:30:07", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:07.051727 containerd[1460]: 2025-09-05 00:11:07.046 [INFO][4670] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93" Namespace="kube-system" Pod="coredns-674b8bbfcf-fmz4m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:07.078928 containerd[1460]: time="2025-09-05T00:11:07.078783664Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:07.078928 containerd[1460]: time="2025-09-05T00:11:07.078891967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:07.078928 containerd[1460]: time="2025-09-05T00:11:07.078909680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:07.079139 containerd[1460]: time="2025-09-05T00:11:07.079024004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:07.105659 systemd[1]: Started cri-containerd-9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93.scope - libcontainer container 9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93. Sep 5 00:11:07.122064 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:07.152371 containerd[1460]: time="2025-09-05T00:11:07.152329758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fmz4m,Uid:1fa28034-a693-41d8-9eae-06ce071ba306,Namespace:kube-system,Attempt:1,} returns sandbox id \"9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93\"" Sep 5 00:11:07.153513 kubelet[2509]: E0905 00:11:07.153472 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:07.160066 containerd[1460]: time="2025-09-05T00:11:07.159738728Z" level=info msg="CreateContainer within sandbox \"9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:11:07.178489 containerd[1460]: time="2025-09-05T00:11:07.178441826Z" level=info msg="CreateContainer within sandbox \"9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c899885cadf1bb83064b291da1b86c5eb8797ea55859b5b3d8ed78ea0df11995\"" Sep 5 00:11:07.179182 containerd[1460]: time="2025-09-05T00:11:07.179146588Z" level=info msg="StartContainer for \"c899885cadf1bb83064b291da1b86c5eb8797ea55859b5b3d8ed78ea0df11995\"" Sep 5 00:11:07.209625 systemd[1]: Started cri-containerd-c899885cadf1bb83064b291da1b86c5eb8797ea55859b5b3d8ed78ea0df11995.scope - libcontainer container c899885cadf1bb83064b291da1b86c5eb8797ea55859b5b3d8ed78ea0df11995. Sep 5 00:11:07.250665 containerd[1460]: time="2025-09-05T00:11:07.250517553Z" level=info msg="StartContainer for \"c899885cadf1bb83064b291da1b86c5eb8797ea55859b5b3d8ed78ea0df11995\" returns successfully" Sep 5 00:11:07.543957 systemd[1]: Started sshd@10-10.0.0.52:22-10.0.0.1:37812.service - OpenSSH per-connection server daemon (10.0.0.1:37812). 
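Annotation: one readability note on the endpoint dumps above: the Go struct printer renders the CoreDNS ports in hex, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). Trivial to confirm:

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        for _, h := range []string{"0x35", "0x23c1"} {
            n, _ := strconv.ParseUint(h, 0, 16) // base 0 honors the 0x prefix
            fmt.Printf("%s = %d\n", h, n)       // 0x35 = 53, 0x23c1 = 9153
        }
    }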
Sep 5 00:11:07.566521 containerd[1460]: time="2025-09-05T00:11:07.566483688Z" level=info msg="StopPodSandbox for \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\"" Sep 5 00:11:07.601619 kubelet[2509]: E0905 00:11:07.601249 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:07.651512 kubelet[2509]: I0905 00:11:07.647483 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fmz4m" podStartSLOduration=49.64745885 podStartE2EDuration="49.64745885s" podCreationTimestamp="2025-09-05 00:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:07.646759269 +0000 UTC m=+56.180838847" watchObservedRunningTime="2025-09-05 00:11:07.64745885 +0000 UTC m=+56.181538428" Sep 5 00:11:07.688452 containerd[1460]: time="2025-09-05T00:11:07.687966565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:07.692273 containerd[1460]: time="2025-09-05T00:11:07.692172966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 5 00:11:07.693828 sshd[4789]: Accepted publickey for core from 10.0.0.1 port 37812 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:07.695800 containerd[1460]: time="2025-09-05T00:11:07.695713819Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:07.699864 containerd[1460]: time="2025-09-05T00:11:07.699511824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:07.700062 sshd[4789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:07.702036 containerd[1460]: time="2025-09-05T00:11:07.701917657Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.550126209s" Sep 5 00:11:07.702036 containerd[1460]: time="2025-09-05T00:11:07.701973562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:11:07.704757 containerd[1460]: time="2025-09-05T00:11:07.704528025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 00:11:07.715112 systemd-logind[1443]: New session 11 of user core. Sep 5 00:11:07.721006 systemd-networkd[1389]: calia3f67cbff20: Gained IPv6LL Sep 5 00:11:07.726398 containerd[1460]: time="2025-09-05T00:11:07.716899344Z" level=info msg="CreateContainer within sandbox \"466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:11:07.724851 systemd[1]: Started session-11.scope - Session 11 of User core. 
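Annotation: two figures in this stretch cross-check cleanly. The reported podStartSLOduration of 49.64745885s is exactly observedRunningTime minus podCreationTimestamp (00:11:07.647 minus 00:10:18), and the apiserver image pull read 47,333,864 bytes in 3.550126209s, roughly 12.7 MiB/s. The arithmetic:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339, "2025-09-05T00:10:18Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:11:07.64745885Z")
        fmt.Println("pod startup:", running.Sub(created)) // 49.64745885s

        const bytesRead, seconds = 47333864.0, 3.550126209
        fmt.Printf("pull throughput: %.1f MiB/s\n", bytesRead/seconds/(1<<20)) // ~12.7
    }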
Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.649 [INFO][4801] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.670 [INFO][4801] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" iface="eth0" netns="/var/run/netns/cni-902ba23a-93fa-1b56-f7a9-87d384542530" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.671 [INFO][4801] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" iface="eth0" netns="/var/run/netns/cni-902ba23a-93fa-1b56-f7a9-87d384542530" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.672 [INFO][4801] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" iface="eth0" netns="/var/run/netns/cni-902ba23a-93fa-1b56-f7a9-87d384542530" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.673 [INFO][4801] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.673 [INFO][4801] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.729 [INFO][4816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.729 [INFO][4816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.729 [INFO][4816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.736 [WARNING][4816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.736 [INFO][4816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.738 [INFO][4816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:07.747583 containerd[1460]: 2025-09-05 00:11:07.741 [INFO][4801] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:07.752749 containerd[1460]: time="2025-09-05T00:11:07.752700027Z" level=info msg="TearDown network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" successfully" Sep 5 00:11:07.752749 containerd[1460]: time="2025-09-05T00:11:07.752740984Z" level=info msg="StopPodSandbox for \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" returns successfully" Sep 5 00:11:07.755729 kubelet[2509]: E0905 00:11:07.753177 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:07.756125 containerd[1460]: time="2025-09-05T00:11:07.755011213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fnhh4,Uid:c644f92f-3ab8-4f91-9628-5d21f3b334b2,Namespace:kube-system,Attempt:1,}" Sep 5 00:11:07.754830 systemd[1]: run-netns-cni\x2d902ba23a\x2d93fa\x2d1b56\x2df7a9\x2d87d384542530.mount: Deactivated successfully. Sep 5 00:11:08.168665 systemd-networkd[1389]: cali0776b1d81a4: Gained IPv6LL Sep 5 00:11:08.255465 sshd[4789]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:08.261821 containerd[1460]: time="2025-09-05T00:11:08.261776438Z" level=info msg="CreateContainer within sandbox \"466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"33583eb384df285c3330ea0c37aec2562214766a187897bb9c30d381117c04ef\"" Sep 5 00:11:08.263855 containerd[1460]: time="2025-09-05T00:11:08.262648645Z" level=info msg="StartContainer for \"33583eb384df285c3330ea0c37aec2562214766a187897bb9c30d381117c04ef\"" Sep 5 00:11:08.265608 systemd[1]: sshd@10-10.0.0.52:22-10.0.0.1:37812.service: Deactivated successfully. Sep 5 00:11:08.286558 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:11:08.291257 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:11:08.296837 systemd-networkd[1389]: cali57521ff7f58: Gained IPv6LL Sep 5 00:11:08.299820 systemd[1]: Started sshd@11-10.0.0.52:22-10.0.0.1:37820.service - OpenSSH per-connection server daemon (10.0.0.1:37820). Sep 5 00:11:08.306120 systemd[1]: Started cri-containerd-33583eb384df285c3330ea0c37aec2562214766a187897bb9c30d381117c04ef.scope - libcontainer container 33583eb384df285c3330ea0c37aec2562214766a187897bb9c30d381117c04ef. Sep 5 00:11:08.307522 systemd-logind[1443]: Removed session 11. Sep 5 00:11:08.330131 sshd[4870]: Accepted publickey for core from 10.0.0.1 port 37820 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:08.332011 sshd[4870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:08.336126 systemd-logind[1443]: New session 12 of user core. Sep 5 00:11:08.345911 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 5 00:11:08.471076 containerd[1460]: time="2025-09-05T00:11:08.470885602Z" level=info msg="StartContainer for \"33583eb384df285c3330ea0c37aec2562214766a187897bb9c30d381117c04ef\" returns successfully" Sep 5 00:11:08.530888 systemd-networkd[1389]: cali7a6d448079a: Link UP Sep 5 00:11:08.538110 systemd-networkd[1389]: cali7a6d448079a: Gained carrier Sep 5 00:11:08.566906 containerd[1460]: time="2025-09-05T00:11:08.566848095Z" level=info msg="StopPodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\"" Sep 5 00:11:08.607324 kubelet[2509]: E0905 00:11:08.607270 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.400 [INFO][4849] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0 coredns-674b8bbfcf- kube-system c644f92f-3ab8-4f91-9628-5d21f3b334b2 1120 0 2025-09-05 00:10:18 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fnhh4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7a6d448079a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.400 [INFO][4849] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.434 [INFO][4895] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" HandleID="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.434 [INFO][4895] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" HandleID="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001347c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fnhh4", "timestamp":"2025-09-05 00:11:08.434295379 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.434 [INFO][4895] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.434 [INFO][4895] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.434 [INFO][4895] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.443 [INFO][4895] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.447 [INFO][4895] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.452 [INFO][4895] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.454 [INFO][4895] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.456 [INFO][4895] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.457 [INFO][4895] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.458 [INFO][4895] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0 Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.497 [INFO][4895] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.520 [INFO][4895] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.520 [INFO][4895] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" host="localhost" Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.520 [INFO][4895] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:08.752103 containerd[1460]: 2025-09-05 00:11:08.520 [INFO][4895] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" HandleID="k8s-pod-network.9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:08.752780 containerd[1460]: 2025-09-05 00:11:08.524 [INFO][4849] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c644f92f-3ab8-4f91-9628-5d21f3b334b2", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fnhh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6d448079a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:08.752780 containerd[1460]: 2025-09-05 00:11:08.525 [INFO][4849] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:08.752780 containerd[1460]: 2025-09-05 00:11:08.525 [INFO][4849] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a6d448079a ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:08.752780 containerd[1460]: 2025-09-05 00:11:08.534 [INFO][4849] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:08.752780 
containerd[1460]: 2025-09-05 00:11:08.541 [INFO][4849] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c644f92f-3ab8-4f91-9628-5d21f3b334b2", ResourceVersion:"1120", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0", Pod:"coredns-674b8bbfcf-fnhh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6d448079a", MAC:"46:85:4a:2c:29:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:08.752780 containerd[1460]: 2025-09-05 00:11:08.746 [INFO][4849] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0" Namespace="kube-system" Pod="coredns-674b8bbfcf-fnhh4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:09.061599 containerd[1460]: time="2025-09-05T00:11:09.054138826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:09.061599 containerd[1460]: time="2025-09-05T00:11:09.061315970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:09.061599 containerd[1460]: time="2025-09-05T00:11:09.061334555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:09.061599 containerd[1460]: time="2025-09-05T00:11:09.061496208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:09.091748 kubelet[2509]: I0905 00:11:09.091664 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56f98cb9dd-wdp6j" podStartSLOduration=38.538362834 podStartE2EDuration="42.091640275s" podCreationTimestamp="2025-09-05 00:10:27 +0000 UTC" firstStartedPulling="2025-09-05 00:11:04.151058774 +0000 UTC m=+52.685138352" lastFinishedPulling="2025-09-05 00:11:07.704336215 +0000 UTC m=+56.238415793" observedRunningTime="2025-09-05 00:11:09.091140809 +0000 UTC m=+57.625220407" watchObservedRunningTime="2025-09-05 00:11:09.091640275 +0000 UTC m=+57.625719843" Sep 5 00:11:09.093729 systemd[1]: Started cri-containerd-9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0.scope - libcontainer container 9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0. Sep 5 00:11:09.111186 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:09.143522 containerd[1460]: time="2025-09-05T00:11:09.143457190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fnhh4,Uid:c644f92f-3ab8-4f91-9628-5d21f3b334b2,Namespace:kube-system,Attempt:1,} returns sandbox id \"9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0\"" Sep 5 00:11:09.144818 kubelet[2509]: E0905 00:11:09.144767 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:09.173128 sshd[4870]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:09.181703 systemd[1]: sshd@11-10.0.0.52:22-10.0.0.1:37820.service: Deactivated successfully. Sep 5 00:11:09.184349 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.122 [INFO][4925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.122 [INFO][4925] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" iface="eth0" netns="/var/run/netns/cni-ad7ad277-8a62-41d1-6c88-593ec1574d08" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.122 [INFO][4925] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" iface="eth0" netns="/var/run/netns/cni-ad7ad277-8a62-41d1-6c88-593ec1574d08" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.122 [INFO][4925] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" iface="eth0" netns="/var/run/netns/cni-ad7ad277-8a62-41d1-6c88-593ec1574d08" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.122 [INFO][4925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.122 [INFO][4925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.150 [INFO][4976] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.150 [INFO][4976] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.150 [INFO][4976] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.168 [WARNING][4976] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.168 [INFO][4976] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.174 [INFO][4976] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:09.186609 containerd[1460]: 2025-09-05 00:11:09.181 [INFO][4925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:09.187891 containerd[1460]: time="2025-09-05T00:11:09.186973192Z" level=info msg="TearDown network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" successfully" Sep 5 00:11:09.187891 containerd[1460]: time="2025-09-05T00:11:09.187020722Z" level=info msg="StopPodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" returns successfully" Sep 5 00:11:09.186817 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:11:09.190559 containerd[1460]: time="2025-09-05T00:11:09.188783608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54j5k,Uid:d4aa0d59-f65d-4bd5-953e-3a3464571ba3,Namespace:calico-system,Attempt:1,}" Sep 5 00:11:09.193629 systemd[1]: run-netns-cni\x2dad7ad277\x2d8a62\x2d41d1\x2d6c88\x2d593ec1574d08.mount: Deactivated successfully. Sep 5 00:11:09.201081 systemd[1]: Started sshd@12-10.0.0.52:22-10.0.0.1:37834.service - OpenSSH per-connection server daemon (10.0.0.1:37834). Sep 5 00:11:09.202343 systemd-logind[1443]: Removed session 12. 
Sep 5 00:11:09.234513 sshd[4996]: Accepted publickey for core from 10.0.0.1 port 37834 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:09.236274 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:09.241291 systemd-logind[1443]: New session 13 of user core. Sep 5 00:11:09.250562 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:11:09.314466 containerd[1460]: time="2025-09-05T00:11:09.314254200Z" level=info msg="CreateContainer within sandbox \"9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:11:09.381091 containerd[1460]: time="2025-09-05T00:11:09.380731967Z" level=info msg="CreateContainer within sandbox \"9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c31c251a228ac4486f5baf41c361584c0b45714639a7d96b1dceab598c45ae12\"" Sep 5 00:11:09.382259 containerd[1460]: time="2025-09-05T00:11:09.382216772Z" level=info msg="StartContainer for \"c31c251a228ac4486f5baf41c361584c0b45714639a7d96b1dceab598c45ae12\"" Sep 5 00:11:09.427621 sshd[4996]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:09.431725 systemd[1]: Started cri-containerd-c31c251a228ac4486f5baf41c361584c0b45714639a7d96b1dceab598c45ae12.scope - libcontainer container c31c251a228ac4486f5baf41c361584c0b45714639a7d96b1dceab598c45ae12. Sep 5 00:11:09.434758 systemd[1]: sshd@12-10.0.0.52:22-10.0.0.1:37834.service: Deactivated successfully. Sep 5 00:11:09.438456 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:11:09.440402 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:11:09.444271 systemd-logind[1443]: Removed session 13. 
Sep 5 00:11:09.472901 containerd[1460]: time="2025-09-05T00:11:09.472846466Z" level=info msg="StartContainer for \"c31c251a228ac4486f5baf41c361584c0b45714639a7d96b1dceab598c45ae12\" returns successfully" Sep 5 00:11:09.530164 systemd-networkd[1389]: calibf855033e21: Link UP Sep 5 00:11:09.530413 systemd-networkd[1389]: calibf855033e21: Gained carrier Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.423 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--54j5k-eth0 csi-node-driver- calico-system d4aa0d59-f65d-4bd5-953e-3a3464571ba3 1141 0 2025-09-05 00:10:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-54j5k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calibf855033e21 [] [] }} ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.424 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.474 [INFO][5043] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" HandleID="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.474 [INFO][5043] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" HandleID="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011e160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-54j5k", "timestamp":"2025-09-05 00:11:09.474276478 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.474 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.474 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.475 [INFO][5043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.481 [INFO][5043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.487 [INFO][5043] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.493 [INFO][5043] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.495 [INFO][5043] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.498 [INFO][5043] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.498 [INFO][5043] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.500 [INFO][5043] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.505 [INFO][5043] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.521 [INFO][5043] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.521 [INFO][5043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" host="localhost" Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.521 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:11:09.545704 containerd[1460]: 2025-09-05 00:11:09.521 [INFO][5043] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" HandleID="k8s-pod-network.185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.546538 containerd[1460]: 2025-09-05 00:11:09.525 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54j5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4aa0d59-f65d-4bd5-953e-3a3464571ba3", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-54j5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf855033e21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:09.546538 containerd[1460]: 2025-09-05 00:11:09.525 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.546538 containerd[1460]: 2025-09-05 00:11:09.525 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf855033e21 ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.546538 containerd[1460]: 2025-09-05 00:11:09.529 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.546538 containerd[1460]: 2025-09-05 00:11:09.529 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54j5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4aa0d59-f65d-4bd5-953e-3a3464571ba3", ResourceVersion:"1141", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a", Pod:"csi-node-driver-54j5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf855033e21", MAC:"22:e6:6a:17:23:f3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:09.546538 containerd[1460]: 2025-09-05 00:11:09.538 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a" Namespace="calico-system" Pod="csi-node-driver-54j5k" WorkloadEndpoint="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:09.581520 containerd[1460]: time="2025-09-05T00:11:09.580354962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:11:09.581520 containerd[1460]: time="2025-09-05T00:11:09.581167387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:11:09.581520 containerd[1460]: time="2025-09-05T00:11:09.581182866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:09.581520 containerd[1460]: time="2025-09-05T00:11:09.581274578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:11:09.610820 systemd[1]: Started cri-containerd-185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a.scope - libcontainer container 185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a. 
Sep 5 00:11:09.612837 kubelet[2509]: I0905 00:11:09.612415 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:09.613305 kubelet[2509]: E0905 00:11:09.613266 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:09.615457 kubelet[2509]: E0905 00:11:09.615053 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:09.644371 kubelet[2509]: I0905 00:11:09.644302 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fnhh4" podStartSLOduration=51.644279019 podStartE2EDuration="51.644279019s" podCreationTimestamp="2025-09-05 00:10:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:11:09.626402754 +0000 UTC m=+58.160482332" watchObservedRunningTime="2025-09-05 00:11:09.644279019 +0000 UTC m=+58.178358597" Sep 5 00:11:09.645886 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:11:09.668478 containerd[1460]: time="2025-09-05T00:11:09.666848076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-54j5k,Uid:d4aa0d59-f65d-4bd5-953e-3a3464571ba3,Namespace:calico-system,Attempt:1,} returns sandbox id \"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a\"" Sep 5 00:11:09.960707 systemd-networkd[1389]: cali7a6d448079a: Gained IPv6LL Sep 5 00:11:10.117408 containerd[1460]: time="2025-09-05T00:11:10.117344643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:10.118106 containerd[1460]: time="2025-09-05T00:11:10.118065094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:11:10.119256 containerd[1460]: time="2025-09-05T00:11:10.119226162Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:10.121475 containerd[1460]: time="2025-09-05T00:11:10.121439554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:10.122156 containerd[1460]: time="2025-09-05T00:11:10.122115502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.417546451s" Sep 5 00:11:10.122156 containerd[1460]: time="2025-09-05T00:11:10.122157381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:11:10.123303 containerd[1460]: time="2025-09-05T00:11:10.123274306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:11:10.127094 containerd[1460]: 
time="2025-09-05T00:11:10.127057633Z" level=info msg="CreateContainer within sandbox \"a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:11:10.143799 containerd[1460]: time="2025-09-05T00:11:10.143727292Z" level=info msg="CreateContainer within sandbox \"a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"2dd7b62681d150fb2b84710735cbc3c8c12c7c28c8361b021eef73c2fc4b18cc\"" Sep 5 00:11:10.144331 containerd[1460]: time="2025-09-05T00:11:10.144303043Z" level=info msg="StartContainer for \"2dd7b62681d150fb2b84710735cbc3c8c12c7c28c8361b021eef73c2fc4b18cc\"" Sep 5 00:11:10.181559 systemd[1]: Started cri-containerd-2dd7b62681d150fb2b84710735cbc3c8c12c7c28c8361b021eef73c2fc4b18cc.scope - libcontainer container 2dd7b62681d150fb2b84710735cbc3c8c12c7c28c8361b021eef73c2fc4b18cc. Sep 5 00:11:10.229317 containerd[1460]: time="2025-09-05T00:11:10.229010812Z" level=info msg="StartContainer for \"2dd7b62681d150fb2b84710735cbc3c8c12c7c28c8361b021eef73c2fc4b18cc\" returns successfully" Sep 5 00:11:10.467692 containerd[1460]: time="2025-09-05T00:11:10.467597486Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:10.468447 containerd[1460]: time="2025-09-05T00:11:10.468377490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:11:10.471689 containerd[1460]: time="2025-09-05T00:11:10.471637015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 348.326991ms" Sep 5 00:11:10.471689 containerd[1460]: time="2025-09-05T00:11:10.471688972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:11:10.472907 containerd[1460]: time="2025-09-05T00:11:10.472853927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:11:10.477907 containerd[1460]: time="2025-09-05T00:11:10.477870327Z" level=info msg="CreateContainer within sandbox \"60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:11:10.492655 containerd[1460]: time="2025-09-05T00:11:10.492143962Z" level=info msg="CreateContainer within sandbox \"60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"18d2b508cbf27d3add7ba27e6cad22ee78261439d96aafb4d998c97d9657d9e5\"" Sep 5 00:11:10.492786 containerd[1460]: time="2025-09-05T00:11:10.492749368Z" level=info msg="StartContainer for \"18d2b508cbf27d3add7ba27e6cad22ee78261439d96aafb4d998c97d9657d9e5\"" Sep 5 00:11:10.520617 systemd[1]: Started cri-containerd-18d2b508cbf27d3add7ba27e6cad22ee78261439d96aafb4d998c97d9657d9e5.scope - libcontainer container 18d2b508cbf27d3add7ba27e6cad22ee78261439d96aafb4d998c97d9657d9e5. 
Sep 5 00:11:10.562966 containerd[1460]: time="2025-09-05T00:11:10.562907194Z" level=info msg="StartContainer for \"18d2b508cbf27d3add7ba27e6cad22ee78261439d96aafb4d998c97d9657d9e5\" returns successfully" Sep 5 00:11:10.617595 kubelet[2509]: E0905 00:11:10.617560 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:10.627220 kubelet[2509]: I0905 00:11:10.626964 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56f98cb9dd-dggrz" podStartSLOduration=38.519993252 podStartE2EDuration="43.626945611s" podCreationTimestamp="2025-09-05 00:10:27 +0000 UTC" firstStartedPulling="2025-09-05 00:11:05.365744173 +0000 UTC m=+53.899823751" lastFinishedPulling="2025-09-05 00:11:10.472696532 +0000 UTC m=+59.006776110" observedRunningTime="2025-09-05 00:11:10.625729279 +0000 UTC m=+59.159808857" watchObservedRunningTime="2025-09-05 00:11:10.626945611 +0000 UTC m=+59.161025189" Sep 5 00:11:10.655683 systemd[1]: run-containerd-runc-k8s.io-2dd7b62681d150fb2b84710735cbc3c8c12c7c28c8361b021eef73c2fc4b18cc-runc.9ZHTbd.mount: Deactivated successfully. Sep 5 00:11:11.240608 systemd-networkd[1389]: calibf855033e21: Gained IPv6LL Sep 5 00:11:11.543608 containerd[1460]: time="2025-09-05T00:11:11.543418398Z" level=info msg="StopPodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\"" Sep 5 00:11:11.619068 kubelet[2509]: I0905 00:11:11.619029 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:11.619509 kubelet[2509]: E0905 00:11:11.619396 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.653 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3931ff65-c111-474a-bf9a-aefaf26362d5", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46", Pod:"calico-apiserver-56f98cb9dd-wdp6j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ac6dfad8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.653 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.653 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" iface="eth0" netns="" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.653 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.653 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.677 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.677 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.677 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.796 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.797 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.798 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:11.805625 containerd[1460]: 2025-09-05 00:11:11.802 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.805625 containerd[1460]: time="2025-09-05T00:11:11.805537474Z" level=info msg="TearDown network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" successfully" Sep 5 00:11:11.805625 containerd[1460]: time="2025-09-05T00:11:11.805571641Z" level=info msg="StopPodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" returns successfully" Sep 5 00:11:11.806282 containerd[1460]: time="2025-09-05T00:11:11.806256263Z" level=info msg="RemovePodSandbox for \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\"" Sep 5 00:11:11.809035 containerd[1460]: time="2025-09-05T00:11:11.809002140Z" level=info msg="Forcibly stopping sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\"" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.884 [WARNING][5263] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"3931ff65-c111-474a-bf9a-aefaf26362d5", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"466c971e5484bd4317ead0b49dc1b5935af2f7fa33c5fda3b2d2c4227e1e2b46", Pod:"calico-apiserver-56f98cb9dd-wdp6j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ac6dfad8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.885 [INFO][5263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.885 [INFO][5263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" iface="eth0" netns="" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.885 [INFO][5263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.885 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.908 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.908 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.908 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.939 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.939 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" HandleID="k8s-pod-network.4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--wdp6j-eth0" Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.942 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:11.950187 containerd[1460]: 2025-09-05 00:11:11.946 [INFO][5263] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471" Sep 5 00:11:11.950864 containerd[1460]: time="2025-09-05T00:11:11.950224765Z" level=info msg="TearDown network for sandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" successfully" Sep 5 00:11:11.955965 containerd[1460]: time="2025-09-05T00:11:11.955917136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:11.956072 containerd[1460]: time="2025-09-05T00:11:11.956002892Z" level=info msg="RemovePodSandbox \"4c17fed1c0b4573d24c56d6d1ce261f2a2d3749b2028b63485e027bb99164471\" returns successfully" Sep 5 00:11:11.956674 containerd[1460]: time="2025-09-05T00:11:11.956646075Z" level=info msg="StopPodSandbox for \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\"" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:11.998 [WARNING][5288] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045", Pod:"calico-apiserver-56f98cb9dd-dggrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a91e43ea7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:11.999 [INFO][5288] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:11.999 [INFO][5288] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" iface="eth0" netns="" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:11.999 [INFO][5288] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:11.999 [INFO][5288] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.030 [INFO][5296] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.030 [INFO][5296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.030 [INFO][5296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.035 [WARNING][5296] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.035 [INFO][5296] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.038 [INFO][5296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.046123 containerd[1460]: 2025-09-05 00:11:12.041 [INFO][5288] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.049079 containerd[1460]: time="2025-09-05T00:11:12.049021127Z" level=info msg="TearDown network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" successfully" Sep 5 00:11:12.049079 containerd[1460]: time="2025-09-05T00:11:12.049062937Z" level=info msg="StopPodSandbox for \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" returns successfully" Sep 5 00:11:12.049933 containerd[1460]: time="2025-09-05T00:11:12.049675871Z" level=info msg="RemovePodSandbox for \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\"" Sep 5 00:11:12.052332 containerd[1460]: time="2025-09-05T00:11:12.049987072Z" level=info msg="Forcibly stopping sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\"" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.092 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0", GenerateName:"calico-apiserver-56f98cb9dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"6dd17f3b-8a0c-44c8-8301-94b73eeeab5f", ResourceVersion:"1189", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56f98cb9dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60a7f44cd4f46e2db185d8040f8bad2e6a9ee0aa4f036b6f75e414fc248ea045", Pod:"calico-apiserver-56f98cb9dd-dggrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2a91e43ea7d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.092 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.092 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" iface="eth0" netns="" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.092 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.092 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.146 [INFO][5322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.150 [INFO][5322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.150 [INFO][5322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.159 [WARNING][5322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.160 [INFO][5322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" HandleID="k8s-pod-network.d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Workload="localhost-k8s-calico--apiserver--56f98cb9dd--dggrz-eth0" Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.163 [INFO][5322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.171721 containerd[1460]: 2025-09-05 00:11:12.167 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9" Sep 5 00:11:12.171721 containerd[1460]: time="2025-09-05T00:11:12.171610353Z" level=info msg="TearDown network for sandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" successfully" Sep 5 00:11:12.177685 containerd[1460]: time="2025-09-05T00:11:12.177627823Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:12.177835 containerd[1460]: time="2025-09-05T00:11:12.177714791Z" level=info msg="RemovePodSandbox \"d1c783e1b0f12619082e80f67d3f87f4feabead793c44d92891834c6363806d9\" returns successfully" Sep 5 00:11:12.178528 containerd[1460]: time="2025-09-05T00:11:12.178492774Z" level=info msg="StopPodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\"" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.239 [WARNING][5340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--477dw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b497f3e8-1406-4da9-8e71-b2f813307a42", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777", Pod:"goldmane-54d579b49d-477dw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3f67cbff20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.239 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.239 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" iface="eth0" netns="" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.239 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.239 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.267 [INFO][5352] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.267 [INFO][5352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.268 [INFO][5352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.275 [WARNING][5352] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.275 [INFO][5352] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.276 [INFO][5352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.285197 containerd[1460]: 2025-09-05 00:11:12.281 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.285976 containerd[1460]: time="2025-09-05T00:11:12.285280657Z" level=info msg="TearDown network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" successfully" Sep 5 00:11:12.285976 containerd[1460]: time="2025-09-05T00:11:12.285315224Z" level=info msg="StopPodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" returns successfully" Sep 5 00:11:12.285976 containerd[1460]: time="2025-09-05T00:11:12.285870425Z" level=info msg="RemovePodSandbox for \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\"" Sep 5 00:11:12.285976 containerd[1460]: time="2025-09-05T00:11:12.285900464Z" level=info msg="Forcibly stopping sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\"" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.327 [WARNING][5371] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--477dw-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"b497f3e8-1406-4da9-8e71-b2f813307a42", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777", Pod:"goldmane-54d579b49d-477dw", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia3f67cbff20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.327 [INFO][5371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.328 [INFO][5371] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" iface="eth0" netns="" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.328 [INFO][5371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.328 [INFO][5371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.359 [INFO][5379] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.359 [INFO][5379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.359 [INFO][5379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.365 [WARNING][5379] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.365 [INFO][5379] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" HandleID="k8s-pod-network.3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Workload="localhost-k8s-goldmane--54d579b49d--477dw-eth0" Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.366 [INFO][5379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.372555 containerd[1460]: 2025-09-05 00:11:12.369 [INFO][5371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059" Sep 5 00:11:12.373012 containerd[1460]: time="2025-09-05T00:11:12.372626857Z" level=info msg="TearDown network for sandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" successfully" Sep 5 00:11:12.377233 containerd[1460]: time="2025-09-05T00:11:12.377174319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:12.377310 containerd[1460]: time="2025-09-05T00:11:12.377254293Z" level=info msg="RemovePodSandbox \"3b09a90a14d85859f47bebd793f693485e0c7b7da0c9f6c2019e25158c335059\" returns successfully" Sep 5 00:11:12.378033 containerd[1460]: time="2025-09-05T00:11:12.377993801Z" level=info msg="StopPodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\"" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.415 [WARNING][5399] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" WorkloadEndpoint="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.415 [INFO][5399] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.415 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" iface="eth0" netns="" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.415 [INFO][5399] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.415 [INFO][5399] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.439 [INFO][5407] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.440 [INFO][5407] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.440 [INFO][5407] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.448 [WARNING][5407] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.449 [INFO][5407] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.450 [INFO][5407] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.459826 containerd[1460]: 2025-09-05 00:11:12.456 [INFO][5399] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.459826 containerd[1460]: time="2025-09-05T00:11:12.459689219Z" level=info msg="TearDown network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" successfully" Sep 5 00:11:12.459826 containerd[1460]: time="2025-09-05T00:11:12.459723446Z" level=info msg="StopPodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" returns successfully" Sep 5 00:11:12.460899 containerd[1460]: time="2025-09-05T00:11:12.460283557Z" level=info msg="RemovePodSandbox for \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\"" Sep 5 00:11:12.460899 containerd[1460]: time="2025-09-05T00:11:12.460319877Z" level=info msg="Forcibly stopping sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\"" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.499 [WARNING][5425] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" WorkloadEndpoint="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.499 [INFO][5425] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.499 [INFO][5425] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" iface="eth0" netns="" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.499 [INFO][5425] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.499 [INFO][5425] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.531 [INFO][5434] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.531 [INFO][5434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.531 [INFO][5434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.537 [WARNING][5434] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.537 [INFO][5434] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" HandleID="k8s-pod-network.4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Workload="localhost-k8s-whisker--84f57bb86d--24ltq-eth0" Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.539 [INFO][5434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.547525 containerd[1460]: 2025-09-05 00:11:12.543 [INFO][5425] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1" Sep 5 00:11:12.547525 containerd[1460]: time="2025-09-05T00:11:12.547100936Z" level=info msg="TearDown network for sandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" successfully" Sep 5 00:11:12.564996 containerd[1460]: time="2025-09-05T00:11:12.564930345Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:12.565083 containerd[1460]: time="2025-09-05T00:11:12.565009469Z" level=info msg="RemovePodSandbox \"4cf8d1b42099fe397ace78df44bbe873dcdaa26bbeb15a3238afd05bde9bf5e1\" returns successfully" Sep 5 00:11:12.565603 containerd[1460]: time="2025-09-05T00:11:12.565575080Z" level=info msg="StopPodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\"" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.604 [WARNING][5451] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0", GenerateName:"calico-kube-controllers-5fb59d4ff4-", Namespace:"calico-system", SelfLink:"", UID:"44b99312-546c-47bf-b6a7-75de1f36f388", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb59d4ff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7", Pod:"calico-kube-controllers-5fb59d4ff4-m7trx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0776b1d81a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.604 [INFO][5451] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.604 [INFO][5451] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" iface="eth0" netns="" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.604 [INFO][5451] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.604 [INFO][5451] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.630 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.630 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.630 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.637 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.637 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.639 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.646114 containerd[1460]: 2025-09-05 00:11:12.642 [INFO][5451] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.646747 containerd[1460]: time="2025-09-05T00:11:12.646179943Z" level=info msg="TearDown network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" successfully" Sep 5 00:11:12.646747 containerd[1460]: time="2025-09-05T00:11:12.646216333Z" level=info msg="StopPodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" returns successfully" Sep 5 00:11:12.646864 containerd[1460]: time="2025-09-05T00:11:12.646805431Z" level=info msg="RemovePodSandbox for \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\"" Sep 5 00:11:12.646864 containerd[1460]: time="2025-09-05T00:11:12.646854506Z" level=info msg="Forcibly stopping sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\"" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.685 [WARNING][5477] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0", GenerateName:"calico-kube-controllers-5fb59d4ff4-", Namespace:"calico-system", SelfLink:"", UID:"44b99312-546c-47bf-b6a7-75de1f36f388", ResourceVersion:"1099", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb59d4ff4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7", Pod:"calico-kube-controllers-5fb59d4ff4-m7trx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0776b1d81a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.686 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.686 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" iface="eth0" netns="" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.686 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.686 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.708 [INFO][5485] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.708 [INFO][5485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.709 [INFO][5485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.715 [WARNING][5485] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.716 [INFO][5485] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" HandleID="k8s-pod-network.09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Workload="localhost-k8s-calico--kube--controllers--5fb59d4ff4--m7trx-eth0" Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.717 [INFO][5485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.724153 containerd[1460]: 2025-09-05 00:11:12.721 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99" Sep 5 00:11:12.725380 containerd[1460]: time="2025-09-05T00:11:12.724140509Z" level=info msg="TearDown network for sandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" successfully" Sep 5 00:11:12.730190 containerd[1460]: time="2025-09-05T00:11:12.730136507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:12.730276 containerd[1460]: time="2025-09-05T00:11:12.730255728Z" level=info msg="RemovePodSandbox \"09a37072ecb7f9bfe02a033a3bafa2d9db0db417493950b036afe9c292650d99\" returns successfully" Sep 5 00:11:12.730845 containerd[1460]: time="2025-09-05T00:11:12.730800199Z" level=info msg="StopPodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\"" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.785 [WARNING][5503] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1fa28034-a693-41d8-9eae-06ce071ba306", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93", Pod:"coredns-674b8bbfcf-fmz4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57521ff7f58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.785 [INFO][5503] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.785 [INFO][5503] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" iface="eth0" netns="" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.785 [INFO][5503] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.785 [INFO][5503] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.812 [INFO][5511] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.812 [INFO][5511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.812 [INFO][5511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.819 [WARNING][5511] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.819 [INFO][5511] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.821 [INFO][5511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.830531 containerd[1460]: 2025-09-05 00:11:12.825 [INFO][5503] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.831134 containerd[1460]: time="2025-09-05T00:11:12.830595189Z" level=info msg="TearDown network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" successfully" Sep 5 00:11:12.831134 containerd[1460]: time="2025-09-05T00:11:12.830653010Z" level=info msg="StopPodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" returns successfully" Sep 5 00:11:12.831390 containerd[1460]: time="2025-09-05T00:11:12.831344796Z" level=info msg="RemovePodSandbox for \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\"" Sep 5 00:11:12.831447 containerd[1460]: time="2025-09-05T00:11:12.831390705Z" level=info msg="Forcibly stopping sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\"" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.875 [WARNING][5529] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1fa28034-a693-41d8-9eae-06ce071ba306", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9455b0b17d3128e797c16920b03501b8d01d2f62cf93d497ad559b2aae270d93", Pod:"coredns-674b8bbfcf-fmz4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali57521ff7f58", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.876 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.876 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" iface="eth0" netns="" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.876 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.876 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.910 [INFO][5538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.910 [INFO][5538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.910 [INFO][5538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.915 [WARNING][5538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.915 [INFO][5538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" HandleID="k8s-pod-network.16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Workload="localhost-k8s-coredns--674b8bbfcf--fmz4m-eth0" Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.917 [INFO][5538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:12.923615 containerd[1460]: 2025-09-05 00:11:12.920 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281" Sep 5 00:11:12.924097 containerd[1460]: time="2025-09-05T00:11:12.923682716Z" level=info msg="TearDown network for sandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" successfully" Sep 5 00:11:12.946942 containerd[1460]: time="2025-09-05T00:11:12.946854221Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:12.947151 containerd[1460]: time="2025-09-05T00:11:12.946988501Z" level=info msg="RemovePodSandbox \"16ca68d76785b119e64021bc84f87560704bff7aca4f40d7059be308cfc00281\" returns successfully" Sep 5 00:11:12.947876 containerd[1460]: time="2025-09-05T00:11:12.947810027Z" level=info msg="StopPodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\"" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:12.990 [WARNING][5555] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54j5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4aa0d59-f65d-4bd5-953e-3a3464571ba3", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a", Pod:"csi-node-driver-54j5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf855033e21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:12.990 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:12.990 [INFO][5555] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" iface="eth0" netns="" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:12.991 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:12.991 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.021 [INFO][5564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.022 [INFO][5564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.022 [INFO][5564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.029 [WARNING][5564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.029 [INFO][5564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.031 [INFO][5564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:13.039919 containerd[1460]: 2025-09-05 00:11:13.037 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.039919 containerd[1460]: time="2025-09-05T00:11:13.039877894Z" level=info msg="TearDown network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" successfully" Sep 5 00:11:13.039919 containerd[1460]: time="2025-09-05T00:11:13.039905076Z" level=info msg="StopPodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" returns successfully" Sep 5 00:11:13.041039 containerd[1460]: time="2025-09-05T00:11:13.040387958Z" level=info msg="RemovePodSandbox for \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\"" Sep 5 00:11:13.041039 containerd[1460]: time="2025-09-05T00:11:13.040487700Z" level=info msg="Forcibly stopping sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\"" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.085 [WARNING][5582] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--54j5k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d4aa0d59-f65d-4bd5-953e-3a3464571ba3", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a", Pod:"csi-node-driver-54j5k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibf855033e21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.086 [INFO][5582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.086 [INFO][5582] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" iface="eth0" netns="" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.086 [INFO][5582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.086 [INFO][5582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.110 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.110 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.110 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.116 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.116 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" HandleID="k8s-pod-network.28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Workload="localhost-k8s-csi--node--driver--54j5k-eth0" Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.118 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:13.124415 containerd[1460]: 2025-09-05 00:11:13.121 [INFO][5582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29" Sep 5 00:11:13.125043 containerd[1460]: time="2025-09-05T00:11:13.124486046Z" level=info msg="TearDown network for sandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" successfully" Sep 5 00:11:13.323065 containerd[1460]: time="2025-09-05T00:11:13.322925806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:13.323065 containerd[1460]: time="2025-09-05T00:11:13.322993657Z" level=info msg="RemovePodSandbox \"28a6ee85f3379e87a235ef3e4ef99fb835351895146b3ac5b5a4c2ed1fc9cf29\" returns successfully" Sep 5 00:11:13.323582 containerd[1460]: time="2025-09-05T00:11:13.323461289Z" level=info msg="StopPodSandbox for \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\"" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.370 [WARNING][5608] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c644f92f-3ab8-4f91-9628-5d21f3b334b2", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0", Pod:"coredns-674b8bbfcf-fnhh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6d448079a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.371 [INFO][5608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.371 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" iface="eth0" netns="" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.371 [INFO][5608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.371 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.398 [INFO][5621] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.399 [INFO][5621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.399 [INFO][5621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.407 [WARNING][5621] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.407 [INFO][5621] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.409 [INFO][5621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:13.416029 containerd[1460]: 2025-09-05 00:11:13.412 [INFO][5608] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.416950 containerd[1460]: time="2025-09-05T00:11:13.416271126Z" level=info msg="TearDown network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" successfully" Sep 5 00:11:13.416950 containerd[1460]: time="2025-09-05T00:11:13.416304360Z" level=info msg="StopPodSandbox for \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" returns successfully" Sep 5 00:11:13.416950 containerd[1460]: time="2025-09-05T00:11:13.416842499Z" level=info msg="RemovePodSandbox for \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\"" Sep 5 00:11:13.416950 containerd[1460]: time="2025-09-05T00:11:13.416872486Z" level=info msg="Forcibly stopping sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\"" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.463 [WARNING][5639] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c644f92f-3ab8-4f91-9628-5d21f3b334b2", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 10, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9497ba8aad8f2334da3d346605add1a51cb50f07d4eb6849dde729fcff237ae0", Pod:"coredns-674b8bbfcf-fnhh4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7a6d448079a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.463 [INFO][5639] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.464 [INFO][5639] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" iface="eth0" netns="" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.464 [INFO][5639] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.464 [INFO][5639] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.493 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.493 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.493 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.509 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.509 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" HandleID="k8s-pod-network.7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Workload="localhost-k8s-coredns--674b8bbfcf--fnhh4-eth0" Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.511 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:11:13.525557 containerd[1460]: 2025-09-05 00:11:13.517 [INFO][5639] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508" Sep 5 00:11:13.525557 containerd[1460]: time="2025-09-05T00:11:13.523697235Z" level=info msg="TearDown network for sandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" successfully" Sep 5 00:11:13.529199 containerd[1460]: time="2025-09-05T00:11:13.529134386Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:11:13.540921 containerd[1460]: time="2025-09-05T00:11:13.540835082Z" level=info msg="RemovePodSandbox \"7fbf155a8f0db9a7c079777243e8312386f612be675cc563542959d8a3f27508\" returns successfully" Sep 5 00:11:13.741985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534500915.mount: Deactivated successfully. Sep 5 00:11:14.448362 systemd[1]: Started sshd@13-10.0.0.52:22-10.0.0.1:48682.service - OpenSSH per-connection server daemon (10.0.0.1:48682). 
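Note how the forced removal stays idempotent end to end: the IPAM plugin is "Asked to release address but it doesn't exist" and ignores it, containerd cannot find the sandbox status and sends the event with a nil status, yet RemovePodSandbox still "returns successfully". Cleanup paths that treat not-found as success converge under retries instead of wedging. A sketch of that convention, with hypothetical names rather than containerd's internal API:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("not found")

    // removeSandbox: a forced removal that finds nothing left to delete
    // logs a warning and still reports success, so duplicate teardowns
    // and retries converge on the same end state.
    func removeSandbox(id string, status func(string) error) error {
        if err := status(id); errors.Is(err, errNotFound) {
            fmt.Printf("warning: sandbox %q already gone; reporting success\n", id)
            return nil
        } else if err != nil {
            return err // real failures still propagate
        }
        // ...actual network teardown and resource release here...
        return nil
    }

    func main() {
        gone := func(string) error { return errNotFound }
        if err := removeSandbox("28a6ee85f3379e87...", gone); err == nil {
            fmt.Println("RemovePodSandbox returns successfully")
        }
    }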
Sep 5 00:11:15.691633 containerd[1460]: time="2025-09-05T00:11:15.691516955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:11:15.692509 containerd[1460]: time="2025-09-05T00:11:15.692486122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:15.695877 containerd[1460]: time="2025-09-05T00:11:15.695836637Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:15.696404 containerd[1460]: time="2025-09-05T00:11:15.696360566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.223454972s" Sep 5 00:11:15.696498 containerd[1460]: time="2025-09-05T00:11:15.696404632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:11:15.697040 containerd[1460]: time="2025-09-05T00:11:15.697000228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:15.700870 containerd[1460]: time="2025-09-05T00:11:15.700831751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:11:15.707966 containerd[1460]: time="2025-09-05T00:11:15.707880731Z" level=info msg="CreateContainer within sandbox \"e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:11:15.725117 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 48682 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:15.728095 sshd[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:15.734925 systemd-logind[1443]: New session 14 of user core. Sep 5 00:11:15.749473 kernel: hrtimer: interrupt took 421922518 ns Sep 5 00:11:15.756870 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:11:15.765850 containerd[1460]: time="2025-09-05T00:11:15.765234174Z" level=info msg="CreateContainer within sandbox \"e221917548d90f89ae826a0ca28a3d5f393f940eae73132d69d5dd440a119777\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fbc4a05047996203211227488a4dbdc3740236da7be4d07485ec96be9defad31\"" Sep 5 00:11:15.767525 containerd[1460]: time="2025-09-05T00:11:15.766781475Z" level=info msg="StartContainer for \"fbc4a05047996203211227488a4dbdc3740236da7be4d07485ec96be9defad31\"" Sep 5 00:11:15.814885 systemd[1]: Started cri-containerd-fbc4a05047996203211227488a4dbdc3740236da7be4d07485ec96be9defad31.scope - libcontainer container fbc4a05047996203211227488a4dbdc3740236da7be4d07485ec96be9defad31. 
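The goldmane pull above carries enough data to compute effective throughput: 66357526 bytes read over 5.223454972 s is roughly 12.7 MB/s. (The kernel's "hrtimer: interrupt took 421922518 ns" in the same stretch reports a ~0.42 s timer-interrupt stall, plausible scheduling jitter for a busy KVM guest.) A small sketch that recovers both numbers from the two containerd lines; the regexes are keyed to this log's exact phrasing and the sample strings are abridged copies of the entries above:

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
        "time"
    )

    var (
        bytesRe = regexp.MustCompile(`bytes read=(\d+)`)
        durRe   = regexp.MustCompile(`in ([0-9.]+[µm]?s)`)
    )

    func main() {
        stop := `msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"`
        pulled := `msg="Pulled image ... size "66357372" in 5.223454972s"`

        n, _ := strconv.ParseFloat(bytesRe.FindStringSubmatch(stop)[1], 64)
        d, _ := time.ParseDuration(durRe.FindStringSubmatch(pulled)[1])

        fmt.Printf("pulled %.0f bytes in %v: ~%.1f MB/s\n", n, d, n/d.Seconds()/1e6)
    }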
Sep 5 00:11:15.879610 containerd[1460]: time="2025-09-05T00:11:15.876820160Z" level=info msg="StartContainer for \"fbc4a05047996203211227488a4dbdc3740236da7be4d07485ec96be9defad31\" returns successfully" Sep 5 00:11:15.948715 sshd[5663]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:15.954159 systemd[1]: sshd@13-10.0.0.52:22-10.0.0.1:48682.service: Deactivated successfully. Sep 5 00:11:15.956720 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:11:15.957686 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:11:15.959074 systemd-logind[1443]: Removed session 14. Sep 5 00:11:16.162243 kubelet[2509]: I0905 00:11:16.161814 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-477dw" podStartSLOduration=37.656339067 podStartE2EDuration="47.161760403s" podCreationTimestamp="2025-09-05 00:10:29 +0000 UTC" firstStartedPulling="2025-09-05 00:11:06.193623171 +0000 UTC m=+54.727702759" lastFinishedPulling="2025-09-05 00:11:15.699044517 +0000 UTC m=+64.233124095" observedRunningTime="2025-09-05 00:11:16.16164945 +0000 UTC m=+64.695729028" watchObservedRunningTime="2025-09-05 00:11:16.161760403 +0000 UTC m=+64.695839981" Sep 5 00:11:20.744835 containerd[1460]: time="2025-09-05T00:11:20.744732660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:20.747028 containerd[1460]: time="2025-09-05T00:11:20.746948204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:11:20.749090 containerd[1460]: time="2025-09-05T00:11:20.748999392Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:20.761876 containerd[1460]: time="2025-09-05T00:11:20.761769380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:20.762792 containerd[1460]: time="2025-09-05T00:11:20.762707791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.061825774s" Sep 5 00:11:20.762792 containerd[1460]: time="2025-09-05T00:11:20.762781683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:11:20.771546 containerd[1460]: time="2025-09-05T00:11:20.771492797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:11:20.790278 containerd[1460]: time="2025-09-05T00:11:20.790222086Z" level=info msg="CreateContainer within sandbox \"74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:11:20.808282 containerd[1460]: time="2025-09-05T00:11:20.808216033Z" level=info msg="CreateContainer within sandbox 
\"74f63ee9afa337a55c65f277abdcde2c4d272f59f1be7f6bd33eb1910dcd58d7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"02d11bfe5561d1d475e2158ffda290e0ababba27ba85a7ef8e8ca304325449dc\"" Sep 5 00:11:20.809242 containerd[1460]: time="2025-09-05T00:11:20.809200133Z" level=info msg="StartContainer for \"02d11bfe5561d1d475e2158ffda290e0ababba27ba85a7ef8e8ca304325449dc\"" Sep 5 00:11:20.845704 systemd[1]: Started cri-containerd-02d11bfe5561d1d475e2158ffda290e0ababba27ba85a7ef8e8ca304325449dc.scope - libcontainer container 02d11bfe5561d1d475e2158ffda290e0ababba27ba85a7ef8e8ca304325449dc. Sep 5 00:11:20.894483 containerd[1460]: time="2025-09-05T00:11:20.894384410Z" level=info msg="StartContainer for \"02d11bfe5561d1d475e2158ffda290e0ababba27ba85a7ef8e8ca304325449dc\" returns successfully" Sep 5 00:11:20.965614 systemd[1]: Started sshd@14-10.0.0.52:22-10.0.0.1:37722.service - OpenSSH per-connection server daemon (10.0.0.1:37722). Sep 5 00:11:21.033909 sshd[5814]: Accepted publickey for core from 10.0.0.1 port 37722 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:21.036111 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:21.040959 systemd-logind[1443]: New session 15 of user core. Sep 5 00:11:21.052780 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:11:21.164993 kubelet[2509]: I0905 00:11:21.164363 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fb59d4ff4-m7trx" podStartSLOduration=36.664953914 podStartE2EDuration="51.164334175s" podCreationTimestamp="2025-09-05 00:10:30 +0000 UTC" firstStartedPulling="2025-09-05 00:11:06.27185903 +0000 UTC m=+54.805938608" lastFinishedPulling="2025-09-05 00:11:20.771239271 +0000 UTC m=+69.305318869" observedRunningTime="2025-09-05 00:11:21.15600798 +0000 UTC m=+69.690087558" watchObservedRunningTime="2025-09-05 00:11:21.164334175 +0000 UTC m=+69.698413753" Sep 5 00:11:21.423957 sshd[5814]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:21.428630 systemd[1]: sshd@14-10.0.0.52:22-10.0.0.1:37722.service: Deactivated successfully. Sep 5 00:11:21.430833 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:11:21.431465 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:11:21.432344 systemd-logind[1443]: Removed session 15. 
Sep 5 00:11:22.818674 containerd[1460]: time="2025-09-05T00:11:22.818614126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:22.819545 containerd[1460]: time="2025-09-05T00:11:22.819506206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 00:11:22.820874 containerd[1460]: time="2025-09-05T00:11:22.820844172Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:22.823253 containerd[1460]: time="2025-09-05T00:11:22.823204338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:22.824001 containerd[1460]: time="2025-09-05T00:11:22.823970907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.052428174s" Sep 5 00:11:22.824078 containerd[1460]: time="2025-09-05T00:11:22.824006967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 00:11:22.825829 containerd[1460]: time="2025-09-05T00:11:22.825804825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:11:22.829873 containerd[1460]: time="2025-09-05T00:11:22.829836124Z" level=info msg="CreateContainer within sandbox \"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:11:22.901046 containerd[1460]: time="2025-09-05T00:11:22.900768961Z" level=info msg="CreateContainer within sandbox \"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"737509092e6b5e44e422944ba0a2c93dca7ffda9e315f4e0c81829af5aa20211\"" Sep 5 00:11:22.904237 containerd[1460]: time="2025-09-05T00:11:22.903600501Z" level=info msg="StartContainer for \"737509092e6b5e44e422944ba0a2c93dca7ffda9e315f4e0c81829af5aa20211\"" Sep 5 00:11:23.046846 systemd[1]: Started cri-containerd-737509092e6b5e44e422944ba0a2c93dca7ffda9e315f4e0c81829af5aa20211.scope - libcontainer container 737509092e6b5e44e422944ba0a2c93dca7ffda9e315f4e0c81829af5aa20211. Sep 5 00:11:23.125346 containerd[1460]: time="2025-09-05T00:11:23.125102366Z" level=info msg="StartContainer for \"737509092e6b5e44e422944ba0a2c93dca7ffda9e315f4e0c81829af5aa20211\" returns successfully" Sep 5 00:11:26.436515 systemd[1]: Started sshd@15-10.0.0.52:22-10.0.0.1:37732.service - OpenSSH per-connection server daemon (10.0.0.1:37732). Sep 5 00:11:26.534007 sshd[5899]: Accepted publickey for core from 10.0.0.1 port 37732 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:26.536083 sshd[5899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:26.543483 systemd-logind[1443]: New session 16 of user core. Sep 5 00:11:26.557066 systemd[1]: Started session-16.scope - Session 16 of User core. 
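The "2.052428174s" figure for the csi pull is simply the gap between the PullImage request (logged at 00:11:20.771492797Z, a few entries back) and the Pulled event at 00:11:22.823970907Z; recomputing it from the two RFC 3339 timestamps lands within ~50 µs of the logged value, the small residue being time between the pull completing and the message being emitted. A two-line check:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Both timestamps copied from the containerd entries above.
        start, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:11:20.771492797Z")
        end, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:11:22.823970907Z")
        fmt.Println(end.Sub(start)) // 2.05247811s, vs the logged 2.052428174s
    }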
Sep 5 00:11:26.567775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331811925.mount: Deactivated successfully. Sep 5 00:11:27.609183 sshd[5899]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:27.614835 systemd[1]: sshd@15-10.0.0.52:22-10.0.0.1:37732.service: Deactivated successfully. Sep 5 00:11:27.617959 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:11:27.618773 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:11:27.619856 systemd-logind[1443]: Removed session 16. Sep 5 00:11:28.901392 containerd[1460]: time="2025-09-05T00:11:28.901321228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:28.945503 containerd[1460]: time="2025-09-05T00:11:28.945342705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:11:29.094982 containerd[1460]: time="2025-09-05T00:11:29.094852762Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:29.177737 containerd[1460]: time="2025-09-05T00:11:29.177646597Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:29.178535 containerd[1460]: time="2025-09-05T00:11:29.178498825Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 6.352663874s" Sep 5 00:11:29.178593 containerd[1460]: time="2025-09-05T00:11:29.178540926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:11:29.179714 containerd[1460]: time="2025-09-05T00:11:29.179479529Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:11:29.463230 containerd[1460]: time="2025-09-05T00:11:29.463088491Z" level=info msg="CreateContainer within sandbox \"a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:11:30.617558 containerd[1460]: time="2025-09-05T00:11:30.617496823Z" level=info msg="CreateContainer within sandbox \"a42b1821a1ab2a649be4b0a85f21135d2e9f84e0a97af5d694dea3731d8cf535\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"14223cc2f34136d4a4001a44e3bc04d324b428018cff1f40a2dca0b983a77360\"" Sep 5 00:11:30.618413 containerd[1460]: time="2025-09-05T00:11:30.618296060Z" level=info msg="StartContainer for \"14223cc2f34136d4a4001a44e3bc04d324b428018cff1f40a2dca0b983a77360\"" Sep 5 00:11:30.654592 systemd[1]: Started cri-containerd-14223cc2f34136d4a4001a44e3bc04d324b428018cff1f40a2dca0b983a77360.scope - libcontainer container 14223cc2f34136d4a4001a44e3bc04d324b428018cff1f40a2dca0b983a77360. 
Sep 5 00:11:30.833311 containerd[1460]: time="2025-09-05T00:11:30.833138559Z" level=info msg="StartContainer for \"14223cc2f34136d4a4001a44e3bc04d324b428018cff1f40a2dca0b983a77360\" returns successfully" Sep 5 00:11:31.339559 kubelet[2509]: I0905 00:11:31.339476 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7f4fc74d67-xp58z" podStartSLOduration=3.997283569 podStartE2EDuration="28.339456377s" podCreationTimestamp="2025-09-05 00:11:03 +0000 UTC" firstStartedPulling="2025-09-05 00:11:04.837130234 +0000 UTC m=+53.371209812" lastFinishedPulling="2025-09-05 00:11:29.179303042 +0000 UTC m=+77.713382620" observedRunningTime="2025-09-05 00:11:31.33905462 +0000 UTC m=+79.873134188" watchObservedRunningTime="2025-09-05 00:11:31.339456377 +0000 UTC m=+79.873535955" Sep 5 00:11:31.664663 kubelet[2509]: I0905 00:11:31.664297 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:32.302087 containerd[1460]: time="2025-09-05T00:11:32.301992996Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:32.303309 containerd[1460]: time="2025-09-05T00:11:32.303240165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 5 00:11:32.304684 containerd[1460]: time="2025-09-05T00:11:32.304627182Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:32.307209 containerd[1460]: time="2025-09-05T00:11:32.307143983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:11:32.308086 containerd[1460]: time="2025-09-05T00:11:32.308042487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.128527069s" Sep 5 00:11:32.308186 containerd[1460]: time="2025-09-05T00:11:32.308090298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 5 00:11:32.314418 containerd[1460]: time="2025-09-05T00:11:32.314370088Z" level=info msg="CreateContainer within sandbox \"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:11:32.353731 containerd[1460]: time="2025-09-05T00:11:32.353413793Z" level=info msg="CreateContainer within sandbox \"185fce04a1658479ff360adc3365afc9a84174f5b1b1b18b751a6ff443b4123a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"26a67f7bdac7ead3c00008e2d5e552b62849e3812b738d597965c4efa3757d37\"" Sep 5 00:11:32.354560 containerd[1460]: time="2025-09-05T00:11:32.354523529Z" level=info msg="StartContainer for \"26a67f7bdac7ead3c00008e2d5e552b62849e3812b738d597965c4efa3757d37\"" Sep 5 00:11:32.401713 systemd[1]: Started 
cri-containerd-26a67f7bdac7ead3c00008e2d5e552b62849e3812b738d597965c4efa3757d37.scope - libcontainer container 26a67f7bdac7ead3c00008e2d5e552b62849e3812b738d597965c4efa3757d37. Sep 5 00:11:32.450714 containerd[1460]: time="2025-09-05T00:11:32.450637538Z" level=info msg="StartContainer for \"26a67f7bdac7ead3c00008e2d5e552b62849e3812b738d597965c4efa3757d37\" returns successfully" Sep 5 00:11:32.564030 kubelet[2509]: E0905 00:11:32.563863 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:32.621864 systemd[1]: Started sshd@16-10.0.0.52:22-10.0.0.1:47192.service - OpenSSH per-connection server daemon (10.0.0.1:47192). Sep 5 00:11:32.685070 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 47192 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:32.687627 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:32.693808 systemd-logind[1443]: New session 17 of user core. Sep 5 00:11:32.707829 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:11:33.212232 kubelet[2509]: I0905 00:11:33.212159 2509 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:11:33.213330 kubelet[2509]: I0905 00:11:33.213294 2509 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:11:33.338563 sshd[6006]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:33.345634 systemd[1]: sshd@16-10.0.0.52:22-10.0.0.1:47192.service: Deactivated successfully. Sep 5 00:11:33.350045 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:11:33.355029 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:11:33.356415 systemd-logind[1443]: Removed session 17. Sep 5 00:11:35.805901 kubelet[2509]: I0905 00:11:35.805795 2509 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-54j5k" podStartSLOduration=44.16553432 podStartE2EDuration="1m6.805771031s" podCreationTimestamp="2025-09-05 00:10:29 +0000 UTC" firstStartedPulling="2025-09-05 00:11:09.668984454 +0000 UTC m=+58.203064032" lastFinishedPulling="2025-09-05 00:11:32.309221165 +0000 UTC m=+80.843300743" observedRunningTime="2025-09-05 00:11:33.352355001 +0000 UTC m=+81.886434579" watchObservedRunningTime="2025-09-05 00:11:35.805771031 +0000 UTC m=+84.339850609" Sep 5 00:11:38.346765 systemd[1]: Started sshd@17-10.0.0.52:22-10.0.0.1:47208.service - OpenSSH per-connection server daemon (10.0.0.1:47208). Sep 5 00:11:38.383673 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 47208 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:38.385505 sshd[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:38.389663 systemd-logind[1443]: New session 18 of user core. Sep 5 00:11:38.399573 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:11:38.552532 sshd[6045]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:38.562706 systemd[1]: sshd@17-10.0.0.52:22-10.0.0.1:47208.service: Deactivated successfully. Sep 5 00:11:38.565696 systemd[1]: session-18.scope: Deactivated successfully. 
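The recurring kubelet dns.go:153 errors mean the node's /etc/resolv.conf lists more nameservers than the resolver limit of three, so when kubelet builds a pod's DNS config it keeps only the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and reports the rest as omitted. A quick standalone check for that condition on the node:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // Count nameserver entries in resolv.conf: more than three trips the
    // kubelet "Nameserver limits exceeded" error above, because only the
    // first three survive into the pod-level resolv.conf.
    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        fmt.Printf("%d nameservers: %v\n", len(servers), servers)
        if len(servers) > 3 {
            fmt.Println("kubelet will keep only the first three")
        }
    }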
Sep 5 00:11:38.568185 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:11:38.573690 systemd[1]: Started sshd@18-10.0.0.52:22-10.0.0.1:47216.service - OpenSSH per-connection server daemon (10.0.0.1:47216). Sep 5 00:11:38.574897 systemd-logind[1443]: Removed session 18. Sep 5 00:11:38.613342 sshd[6060]: Accepted publickey for core from 10.0.0.1 port 47216 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:38.615486 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:38.620066 systemd-logind[1443]: New session 19 of user core. Sep 5 00:11:38.633655 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:11:38.844760 sshd[6060]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:38.857779 systemd[1]: sshd@18-10.0.0.52:22-10.0.0.1:47216.service: Deactivated successfully. Sep 5 00:11:38.860073 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:11:38.861834 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:11:38.868923 systemd[1]: Started sshd@19-10.0.0.52:22-10.0.0.1:47222.service - OpenSSH per-connection server daemon (10.0.0.1:47222). Sep 5 00:11:38.870545 systemd-logind[1443]: Removed session 19. Sep 5 00:11:38.900172 sshd[6073]: Accepted publickey for core from 10.0.0.1 port 47222 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:38.901984 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:38.906737 systemd-logind[1443]: New session 20 of user core. Sep 5 00:11:38.916697 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:11:39.436292 sshd[6073]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:39.448105 systemd[1]: sshd@19-10.0.0.52:22-10.0.0.1:47222.service: Deactivated successfully. Sep 5 00:11:39.452673 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:11:39.454414 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:11:39.461846 systemd[1]: Started sshd@20-10.0.0.52:22-10.0.0.1:47224.service - OpenSSH per-connection server daemon (10.0.0.1:47224). Sep 5 00:11:39.463606 systemd-logind[1443]: Removed session 20. Sep 5 00:11:39.503756 sshd[6092]: Accepted publickey for core from 10.0.0.1 port 47224 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:39.506103 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:39.511608 systemd-logind[1443]: New session 21 of user core. Sep 5 00:11:39.516612 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:11:39.847107 sshd[6092]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:39.858719 systemd[1]: sshd@20-10.0.0.52:22-10.0.0.1:47224.service: Deactivated successfully. Sep 5 00:11:39.861218 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:11:39.864087 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:11:39.870857 systemd[1]: Started sshd@21-10.0.0.52:22-10.0.0.1:47238.service - OpenSSH per-connection server daemon (10.0.0.1:47238). Sep 5 00:11:39.872375 systemd-logind[1443]: Removed session 21. 
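The sshd/systemd-logind lines pair up cleanly: each connection produces an "Accepted publickey", a pam_unix "session opened", a numbered session scope, and a matching "session closed". Correlating opened/closed by sshd PID gives per-session durations; sshd[6060] above ran from 00:11:38.615486 to 00:11:38.844760, about 229 ms, consistent with scripted single-command SSH rather than interactive use. A sketch of that correlation over two lines copied from the log:

    package main

    import (
        "fmt"
        "regexp"
        "time"
    )

    var re = regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

    func main() {
        lines := []string{
            "Sep 5 00:11:38.615486 sshd[6060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
            "Sep 5 00:11:38.844760 sshd[6060]: pam_unix(sshd:session): session closed for user core",
        }
        opened := map[string]time.Time{} // sshd PID -> open time
        for _, l := range lines {
            m := re.FindStringSubmatch(l)
            if m == nil {
                continue
            }
            t, _ := time.Parse("Jan 2 15:04:05.000000", m[1])
            if m[3] == "opened" {
                opened[m[2]] = t
            } else if start, ok := opened[m[2]]; ok {
                fmt.Printf("sshd[%s] session lasted %v\n", m[2], t.Sub(start)) // ~229ms
            }
        }
    }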
Sep 5 00:11:39.903121 sshd[6106]: Accepted publickey for core from 10.0.0.1 port 47238 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:39.905310 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:39.911367 systemd-logind[1443]: New session 22 of user core. Sep 5 00:11:39.917639 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:11:40.039735 sshd[6106]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:40.044054 systemd[1]: sshd@21-10.0.0.52:22-10.0.0.1:47238.service: Deactivated successfully. Sep 5 00:11:40.046174 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:11:40.047014 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:11:40.048217 systemd-logind[1443]: Removed session 22. Sep 5 00:11:41.567379 kubelet[2509]: E0905 00:11:41.567321 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:43.566442 kubelet[2509]: E0905 00:11:43.566384 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:45.056477 systemd[1]: Started sshd@22-10.0.0.52:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222). Sep 5 00:11:45.099784 sshd[6120]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:45.101562 sshd[6120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:45.105578 systemd-logind[1443]: New session 23 of user core. Sep 5 00:11:45.114556 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:11:45.328262 sshd[6120]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:45.332451 systemd[1]: sshd@22-10.0.0.52:22-10.0.0.1:39222.service: Deactivated successfully. Sep 5 00:11:45.334616 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:11:45.335332 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:11:45.336278 systemd-logind[1443]: Removed session 23. Sep 5 00:11:47.571030 kubelet[2509]: I0905 00:11:47.570976 2509 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:11:48.563854 kubelet[2509]: E0905 00:11:48.563801 2509 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:11:50.346673 systemd[1]: Started sshd@23-10.0.0.52:22-10.0.0.1:55682.service - OpenSSH per-connection server daemon (10.0.0.1:55682). Sep 5 00:11:50.385904 sshd[6171]: Accepted publickey for core from 10.0.0.1 port 55682 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:50.388056 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:50.392709 systemd-logind[1443]: New session 24 of user core. Sep 5 00:11:50.408707 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 00:11:50.846187 sshd[6171]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:50.851349 systemd[1]: sshd@23-10.0.0.52:22-10.0.0.1:55682.service: Deactivated successfully. Sep 5 00:11:50.853768 systemd[1]: session-24.scope: Deactivated successfully. 
Sep 5 00:11:50.854740 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:11:50.856213 systemd-logind[1443]: Removed session 24. Sep 5 00:11:55.858472 systemd[1]: Started sshd@24-10.0.0.52:22-10.0.0.1:55698.service - OpenSSH per-connection server daemon (10.0.0.1:55698). Sep 5 00:11:55.903668 sshd[6207]: Accepted publickey for core from 10.0.0.1 port 55698 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:11:55.905594 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:11:55.909962 systemd-logind[1443]: New session 25 of user core. Sep 5 00:11:55.919570 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 00:11:56.044632 sshd[6207]: pam_unix(sshd:session): session closed for user core Sep 5 00:11:56.048963 systemd[1]: sshd@24-10.0.0.52:22-10.0.0.1:55698.service: Deactivated successfully. Sep 5 00:11:56.051576 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:11:56.052582 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:11:56.054379 systemd-logind[1443]: Removed session 25.