Sep 5 00:05:27.891927 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025
Sep 5 00:05:27.891952 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:05:27.891963 kernel: BIOS-provided physical RAM map:
Sep 5 00:05:27.891970 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 5 00:05:27.891976 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 5 00:05:27.891982 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 5 00:05:27.891990 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Sep 5 00:05:27.891996 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Sep 5 00:05:27.892002 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 5 00:05:27.892011 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 5 00:05:27.892017 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 5 00:05:27.892023 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 5 00:05:27.892030 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Sep 5 00:05:27.892036 kernel: NX (Execute Disable) protection: active
Sep 5 00:05:27.892044 kernel: APIC: Static calls initialized
Sep 5 00:05:27.892053 kernel: SMBIOS 2.8 present.
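The e820 map above is mechanical enough to total up. A minimal sketch (Python; parsing a saved console capture is an assumption, and the two usable regions are transcribed from the log):

```python
import re

# Matches the BIOS-e820 lines exactly as printed above.
E820_RE = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (\w+)")

def usable_bytes(log_text: str) -> int:
    """Sum the 'usable' regions of an e820 map captured from the console."""
    total = 0
    for start, end, kind in E820_RE.findall(log_text):
        if kind == "usable":
            total += int(end, 16) - int(start, 16) + 1  # ranges are inclusive
    return total

log = """
BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
"""
print(usable_bytes(log) / 2**20)  # ~2511 MiB, consistent with the later
                                  # "Memory: 2434592K/2571752K available" line
```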
Sep 5 00:05:27.892060 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Sep 5 00:05:27.892067 kernel: Hypervisor detected: KVM
Sep 5 00:05:27.892073 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 5 00:05:27.892080 kernel: kvm-clock: using sched offset of 2497217693 cycles
Sep 5 00:05:27.892087 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 5 00:05:27.892094 kernel: tsc: Detected 2794.750 MHz processor
Sep 5 00:05:27.892101 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 5 00:05:27.892109 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 5 00:05:27.892116 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Sep 5 00:05:27.892125 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 5 00:05:27.892132 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 5 00:05:27.892139 kernel: Using GB pages for direct mapping
Sep 5 00:05:27.892146 kernel: ACPI: Early table checksum verification disabled
Sep 5 00:05:27.892152 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Sep 5 00:05:27.892159 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892166 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892173 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892182 kernel: ACPI: FACS 0x000000009CFE0000 000040
Sep 5 00:05:27.892189 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892196 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892203 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892211 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 5 00:05:27.892220 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Sep 5 00:05:27.892227 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Sep 5 00:05:27.892238 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Sep 5 00:05:27.892248 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Sep 5 00:05:27.892255 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Sep 5 00:05:27.892262 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Sep 5 00:05:27.892269 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Sep 5 00:05:27.892276 kernel: No NUMA configuration found
Sep 5 00:05:27.892283 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Sep 5 00:05:27.892290 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Sep 5 00:05:27.892299 kernel: Zone ranges:
Sep 5 00:05:27.892307 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 5 00:05:27.892314 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Sep 5 00:05:27.892321 kernel: Normal empty
Sep 5 00:05:27.892328 kernel: Movable zone start for each node
Sep 5 00:05:27.892335 kernel: Early memory node ranges
Sep 5 00:05:27.892342 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 5 00:05:27.892349 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Sep 5 00:05:27.892368 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Sep 5 00:05:27.892377 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 5 00:05:27.892384 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 5 00:05:27.892392 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Sep 5 00:05:27.892399 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 5 00:05:27.892406 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 5 00:05:27.892413 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 5 00:05:27.892421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 5 00:05:27.892428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 5 00:05:27.892435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 5 00:05:27.892446 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 5 00:05:27.892453 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 5 00:05:27.892461 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 5 00:05:27.892468 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 5 00:05:27.892490 kernel: TSC deadline timer available
Sep 5 00:05:27.892497 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 5 00:05:27.892506 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 5 00:05:27.892514 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 5 00:05:27.892521 kernel: kvm-guest: setup PV sched yield
Sep 5 00:05:27.892528 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 5 00:05:27.892538 kernel: Booting paravirtualized kernel on KVM
Sep 5 00:05:27.892548 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 5 00:05:27.892556 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 5 00:05:27.892563 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 5 00:05:27.892570 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 5 00:05:27.892577 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 5 00:05:27.892584 kernel: kvm-guest: PV spinlocks enabled
Sep 5 00:05:27.892592 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 5 00:05:27.892601 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:05:27.892615 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 5 00:05:27.892622 kernel: random: crng init done
Sep 5 00:05:27.892629 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 5 00:05:27.892637 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 5 00:05:27.892644 kernel: Fallback order for Node 0: 0
Sep 5 00:05:27.892651 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Sep 5 00:05:27.892658 kernel: Policy zone: DMA32
Sep 5 00:05:27.892665 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 5 00:05:27.892677 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 136900K reserved, 0K cma-reserved)
Sep 5 00:05:27.892687 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 5 00:05:27.892694 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 5 00:05:27.892701 kernel: ftrace: allocated 149 pages with 4 groups
Sep 5 00:05:27.892708 kernel: Dynamic Preempt: voluntary
Sep 5 00:05:27.892716 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 5 00:05:27.892727 kernel: rcu: RCU event tracing is enabled.
Sep 5 00:05:27.892735 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 5 00:05:27.892742 kernel: Trampoline variant of Tasks RCU enabled.
Sep 5 00:05:27.892754 kernel: Rude variant of Tasks RCU enabled.
Sep 5 00:05:27.892762 kernel: Tracing variant of Tasks RCU enabled.
Sep 5 00:05:27.892769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 5 00:05:27.892776 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 5 00:05:27.892783 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 5 00:05:27.892791 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 5 00:05:27.892798 kernel: Console: colour VGA+ 80x25
Sep 5 00:05:27.892805 kernel: printk: console [ttyS0] enabled
Sep 5 00:05:27.892815 kernel: ACPI: Core revision 20230628
Sep 5 00:05:27.892828 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 5 00:05:27.892836 kernel: APIC: Switch to symmetric I/O mode setup
Sep 5 00:05:27.892843 kernel: x2apic enabled
Sep 5 00:05:27.892850 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 5 00:05:27.892857 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 5 00:05:27.892864 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 5 00:05:27.892872 kernel: kvm-guest: setup PV IPIs
Sep 5 00:05:27.892890 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 5 00:05:27.892898 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 5 00:05:27.892906 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Sep 5 00:05:27.892913 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 5 00:05:27.892921 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 5 00:05:27.892932 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 5 00:05:27.892940 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 5 00:05:27.892948 kernel: Spectre V2 : Mitigation: Retpolines
Sep 5 00:05:27.892956 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 5 00:05:27.892963 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 5 00:05:27.892973 kernel: active return thunk: retbleed_return_thunk
Sep 5 00:05:27.892980 kernel: RETBleed: Mitigation: untrained return thunk
Sep 5 00:05:27.892988 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 5 00:05:27.892997 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 5 00:05:27.893005 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 5 00:05:27.893013 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 5 00:05:27.893021 kernel: active return thunk: srso_return_thunk
Sep 5 00:05:27.893028 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 5 00:05:27.893038 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 5 00:05:27.893046 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 5 00:05:27.893055 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 5 00:05:27.893063 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 5 00:05:27.893070 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 5 00:05:27.893078 kernel: Freeing SMP alternatives memory: 32K
Sep 5 00:05:27.893087 kernel: pid_max: default: 32768 minimum: 301
Sep 5 00:05:27.893095 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 5 00:05:27.893102 kernel: landlock: Up and running.
Sep 5 00:05:27.893112 kernel: SELinux: Initializing.
Sep 5 00:05:27.893119 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:05:27.893127 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 5 00:05:27.893134 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 5 00:05:27.893144 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:05:27.893152 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:05:27.893160 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 5 00:05:27.893167 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 5 00:05:27.893175 kernel: ... version: 0
Sep 5 00:05:27.893184 kernel: ... bit width: 48
Sep 5 00:05:27.893192 kernel: ... generic registers: 6
Sep 5 00:05:27.893200 kernel: ... value mask: 0000ffffffffffff
Sep 5 00:05:27.893209 kernel: ... max period: 00007fffffffffff
Sep 5 00:05:27.893217 kernel: ... fixed-purpose events: 0
Sep 5 00:05:27.893224 kernel: ... event mask: 000000000000003f
Sep 5 00:05:27.893233 kernel: signal: max sigframe size: 1776
Sep 5 00:05:27.893242 kernel: rcu: Hierarchical SRCU implementation.
Sep 5 00:05:27.893249 kernel: rcu: Max phase no-delay instances is 400.
Sep 5 00:05:27.893259 kernel: smp: Bringing up secondary CPUs ...
Sep 5 00:05:27.893266 kernel: smpboot: x86: Booting SMP configuration:
Sep 5 00:05:27.893274 kernel: .... node #0, CPUs: #1 #2 #3
Sep 5 00:05:27.893281 kernel: smp: Brought up 1 node, 4 CPUs
Sep 5 00:05:27.893289 kernel: smpboot: Max logical packages: 1
Sep 5 00:05:27.893298 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Sep 5 00:05:27.893306 kernel: devtmpfs: initialized
Sep 5 00:05:27.893313 kernel: x86/mm: Memory block size: 128MB
Sep 5 00:05:27.893321 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 5 00:05:27.893330 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 5 00:05:27.893338 kernel: pinctrl core: initialized pinctrl subsystem
Sep 5 00:05:27.893346 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 5 00:05:27.893363 kernel: audit: initializing netlink subsys (disabled)
Sep 5 00:05:27.893374 kernel: audit: type=2000 audit(1757030727.346:1): state=initialized audit_enabled=0 res=1
Sep 5 00:05:27.893384 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 5 00:05:27.893394 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 5 00:05:27.893401 kernel: cpuidle: using governor menu
Sep 5 00:05:27.893409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 5 00:05:27.893419 kernel: dca service started, version 1.12.1
Sep 5 00:05:27.893429 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 5 00:05:27.893437 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 5 00:05:27.893445 kernel: PCI: Using configuration type 1 for base access
Sep 5 00:05:27.893453 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
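The preset delay-loop numbers above are internally consistent: the 2794.750 MHz TSC gives lpj=2794750 loops per jiffy (assuming HZ=1000, which the log does not state), 5589.50 BogoMIPS per CPU, and exactly the 22358.00 total reported once all four CPUs are up. A quick check:

```python
tsc_khz = 2794750   # "tsc: Detected 2794.750 MHz processor"
HZ = 1000           # assumed CONFIG_HZ; the log does not state it

lpj = tsc_khz * 1000 // HZ      # 2794750, matching "(lpj=2794750)"
bogomips = lpj / (500000 / HZ)  # the kernel's lpj -> BogoMIPS conversion
print(bogomips)                 # 5589.5 per CPU
print(4 * bogomips)             # 22358.0, matching "Total of 4 processors activated"
```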
Sep 5 00:05:27.893460 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 5 00:05:27.893468 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 5 00:05:27.893489 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 5 00:05:27.893498 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 5 00:05:27.893508 kernel: ACPI: Added _OSI(Module Device)
Sep 5 00:05:27.893516 kernel: ACPI: Added _OSI(Processor Device)
Sep 5 00:05:27.893523 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 5 00:05:27.893531 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 5 00:05:27.893538 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 5 00:05:27.893546 kernel: ACPI: Interpreter enabled
Sep 5 00:05:27.893553 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 5 00:05:27.893561 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 5 00:05:27.893568 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 5 00:05:27.893579 kernel: PCI: Using E820 reservations for host bridge windows
Sep 5 00:05:27.893589 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 5 00:05:27.893598 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 5 00:05:27.893784 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 5 00:05:27.893921 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 5 00:05:27.894054 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 5 00:05:27.894065 kernel: PCI host bridge to bus 0000:00
Sep 5 00:05:27.894202 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 5 00:05:27.894321 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 5 00:05:27.894449 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 5 00:05:27.894585 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 5 00:05:27.894708 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 5 00:05:27.894825 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Sep 5 00:05:27.894938 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 5 00:05:27.895085 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 5 00:05:27.895238 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 5 00:05:27.895378 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Sep 5 00:05:27.895554 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Sep 5 00:05:27.895693 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Sep 5 00:05:27.895819 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 5 00:05:27.895958 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 5 00:05:27.896090 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 5 00:05:27.896218 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Sep 5 00:05:27.896344 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Sep 5 00:05:27.896578 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 5 00:05:27.896715 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Sep 5 00:05:27.896849 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Sep 5 00:05:27.896985 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Sep 5 00:05:27.897121 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 5 00:05:27.897250 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Sep 5 00:05:27.897392 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Sep 5 00:05:27.897544 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Sep 5 00:05:27.897673 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Sep 5 00:05:27.897801 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 5 00:05:27.897932 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 5 00:05:27.898074 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 5 00:05:27.898201 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Sep 5 00:05:27.898394 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Sep 5 00:05:27.898589 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 5 00:05:27.898718 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 5 00:05:27.898730 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 5 00:05:27.898743 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 5 00:05:27.898750 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 5 00:05:27.898758 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 5 00:05:27.898765 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 5 00:05:27.898773 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 5 00:05:27.898780 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 5 00:05:27.898788 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 5 00:05:27.898797 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 5 00:05:27.898808 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 5 00:05:27.898819 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 5 00:05:27.898827 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 5 00:05:27.898835 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 5 00:05:27.898842 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 5 00:05:27.898850 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 5 00:05:27.898857 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 5 00:05:27.898865 kernel: iommu: Default domain type: Translated
Sep 5 00:05:27.898874 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 5 00:05:27.898894 kernel: PCI: Using ACPI for IRQ routing
Sep 5 00:05:27.898904 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 5 00:05:27.898911 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 5 00:05:27.899041 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Sep 5 00:05:27.899168 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 5 00:05:27.899298 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 5 00:05:27.899309 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 5 00:05:27.899317 kernel: vgaarb: loaded
Sep 5 00:05:27.899329 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 5 00:05:27.899340 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 5 00:05:27.899358 kernel: clocksource: Switched to clocksource kvm-clock
Sep 5 00:05:27.899366 kernel: VFS: Disk quotas dquot_6.6.0
Sep 5 00:05:27.899374 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 5 00:05:27.899548 kernel: pnp: PnP ACPI init
Sep 5 00:05:27.899548 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 5 00:05:27.899565 kernel: pnp: PnP ACPI: found 6 devices
Sep 5 00:05:27.899575 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 5 00:05:27.899587 kernel: NET: Registered PF_INET protocol family
Sep 5 00:05:27.899597 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 5 00:05:27.899604 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 5 00:05:27.899612 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 5 00:05:27.899622 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 5 00:05:27.899630 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 5 00:05:27.899638 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 5 00:05:27.899645 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:05:27.899653 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 5 00:05:27.899663 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 5 00:05:27.899672 kernel: NET: Registered PF_XDP protocol family
Sep 5 00:05:27.899793 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 5 00:05:27.899922 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 5 00:05:27.900055 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 5 00:05:27.900228 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 5 00:05:27.900366 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 5 00:05:27.900519 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Sep 5 00:05:27.900537 kernel: PCI: CLS 0 bytes, default 64
Sep 5 00:05:27.900548 kernel: Initialise system trusted keyrings
Sep 5 00:05:27.900558 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 5 00:05:27.900568 kernel: Key type asymmetric registered
Sep 5 00:05:27.900575 kernel: Asymmetric key parser 'x509' registered
Sep 5 00:05:27.900583 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 5 00:05:27.900591 kernel: io scheduler mq-deadline registered
Sep 5 00:05:27.900600 kernel: io scheduler kyber registered
Sep 5 00:05:27.900611 kernel: io scheduler bfq registered
Sep 5 00:05:27.900628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 5 00:05:27.900639 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 5 00:05:27.900649 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 5 00:05:27.900659 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 5 00:05:27.900672 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 5 00:05:27.900683 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 5 00:05:27.900692 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 5 00:05:27.900702 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 5 00:05:27.900711 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 5 00:05:27.900868 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 5 00:05:27.900999 kernel: rtc_cmos 00:04: registered as rtc0
Sep 5 00:05:27.901011 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 5 00:05:27.901129 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:05:27 UTC (1757030727)
Sep 5 00:05:27.901256 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 5 00:05:27.901267 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 5 00:05:27.901274 kernel: NET: Registered PF_INET6 protocol family
Sep 5 00:05:27.901282 kernel: Segment Routing with IPv6
Sep 5 00:05:27.901298 kernel: In-situ OAM (IOAM) with IPv6
Sep 5 00:05:27.901308 kernel: NET: Registered PF_PACKET protocol family
Sep 5 00:05:27.901317 kernel: Key type dns_resolver registered
Sep 5 00:05:27.901324 kernel: IPI shorthand broadcast: enabled
Sep 5 00:05:27.901332 kernel: sched_clock: Marking stable (631006112, 110496476)->(801343770, -59841182)
Sep 5 00:05:27.901339 kernel: registered taskstats version 1
Sep 5 00:05:27.901347 kernel: Loading compiled-in X.509 certificates
Sep 5 00:05:27.901363 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c'
Sep 5 00:05:27.901373 kernel: Key type .fscrypt registered
Sep 5 00:05:27.901386 kernel: Key type fscrypt-provisioning registered
Sep 5 00:05:27.901396 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 5 00:05:27.901403 kernel: ima: Allocated hash algorithm: sha1
Sep 5 00:05:27.901411 kernel: ima: No architecture policies found
Sep 5 00:05:27.901418 kernel: clk: Disabling unused clocks
Sep 5 00:05:27.901426 kernel: Freeing unused kernel image (initmem) memory: 42872K
Sep 5 00:05:27.901434 kernel: Write protecting the kernel read-only data: 36864k
Sep 5 00:05:27.901441 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 5 00:05:27.901449 kernel: Run /init as init process
Sep 5 00:05:27.901459 kernel: with arguments:
Sep 5 00:05:27.901468 kernel: /init
Sep 5 00:05:27.901489 kernel: with environment:
Sep 5 00:05:27.901496 kernel: HOME=/
Sep 5 00:05:27.901504 kernel: TERM=linux
Sep 5 00:05:27.901511 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 5 00:05:27.901521 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:05:27.901533 systemd[1]: Detected virtualization kvm.
Sep 5 00:05:27.901547 systemd[1]: Detected architecture x86-64.
Sep 5 00:05:27.901557 systemd[1]: Running in initrd.
Sep 5 00:05:27.901571 systemd[1]: No hostname configured, using default hostname.
Sep 5 00:05:27.901583 systemd[1]: Hostname set to .
Sep 5 00:05:27.901594 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:05:27.901604 systemd[1]: Queued start job for default target initrd.target.
Sep 5 00:05:27.901614 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:05:27.901625 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:05:27.901639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 5 00:05:27.901663 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:05:27.901676 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 5 00:05:27.901685 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 5 00:05:27.901697 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 5 00:05:27.901708 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 5 00:05:27.901716 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:05:27.901725 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:05:27.901735 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:05:27.901743 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:05:27.901752 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:05:27.901760 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:05:27.901770 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:05:27.901781 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:05:27.901790 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 5 00:05:27.901798 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 5 00:05:27.901807 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:05:27.901815 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:05:27.901824 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:05:27.901834 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:05:27.901842 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 5 00:05:27.901851 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:05:27.901861 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 5 00:05:27.901870 systemd[1]: Starting systemd-fsck-usr.service...
Sep 5 00:05:27.901880 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:05:27.901889 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:05:27.901897 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:27.901908 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 5 00:05:27.901921 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:05:27.901929 systemd[1]: Finished systemd-fsck-usr.service.
Sep 5 00:05:27.901960 systemd-journald[191]: Collecting audit messages is disabled.
Sep 5 00:05:27.901983 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:05:27.901995 systemd-journald[191]: Journal started
Sep 5 00:05:27.902016 systemd-journald[191]: Runtime Journal (/run/log/journal/9c9aaa5a7d4345898955333d32cdb23e) is 6.0M, max 48.4M, 42.3M free.
Sep 5 00:05:27.900542 systemd-modules-load[194]: Inserted module 'overlay'
Sep 5 00:05:27.929512 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:05:27.933488 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 5 00:05:27.936654 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 5 00:05:27.937566 kernel: Bridge firewalling registered
Sep 5 00:05:27.940008 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:05:27.941341 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
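The `\x2d` runs in the device unit names above are systemd's path escaping: '/' separators become '-', so a literal '-' inside a path component has to be hex-escaped. A rough sketch of the encoding (simplified from what systemd-escape --path does; leading-dot and empty-component rules are omitted):

```python
def systemd_path_escape(path: str) -> str:
    """Escape a path the way systemd names device units: '/' -> '-',
    other unsafe characters (including a literal '-') -> \\xXX."""
    out = []
    for part in path.strip("/").split("/"):
        out.append("".join(
            c if (c.isalnum() or c in "_.:") else "\\x%02x" % ord(c)
            for c in part
        ))
    return "-".join(out)

print(systemd_path_escape("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above
```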
Sep 5 00:05:27.945384 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:05:27.948571 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:05:27.951602 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:05:27.958030 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:05:27.961109 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:05:27.965733 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:05:27.983811 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:05:27.986685 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:05:27.988330 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 5 00:05:27.991920 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:05:27.993150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:05:28.006131 dracut-cmdline[226]: dracut-dracut-053
Sep 5 00:05:28.009364 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5
Sep 5 00:05:28.039773 systemd-resolved[227]: Positive Trust Anchors:
Sep 5 00:05:28.039788 systemd-resolved[227]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:05:28.039819 systemd-resolved[227]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:05:28.050203 systemd-resolved[227]: Defaulting to hostname 'linux'.
Sep 5 00:05:28.052029 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:05:28.052195 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:05:28.096501 kernel: SCSI subsystem initialized
Sep 5 00:05:28.105509 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 00:05:28.116511 kernel: iscsi: registered transport (tcp)
Sep 5 00:05:28.137508 kernel: iscsi: registered transport (qla4xxx)
Sep 5 00:05:28.137533 kernel: QLogic iSCSI HBA Driver
Sep 5 00:05:28.192641 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:05:28.204612 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 00:05:28.233500 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
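dracut logs the effective kernel command line it acts on, with its own rd.driver.pre=btrfs prepended and the rootflags=rw mount.usrflags=ro pair appearing twice. With last-one-wins parsing the duplication is harmless because the values agree. A toy parser (ignoring quoted values, which the real kernel supports):

```python
def parse_cmdline(cmdline: str) -> dict:
    """Split a kernel command line into key/value pairs; bare flags map to ''."""
    params = {}
    for tok in cmdline.split():
        key, _, value = tok.partition("=")
        params[key] = value  # later duplicates win; here both rootflags=rw agree
    return params

cl = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
      "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
      "root=LABEL=ROOT rootflags=rw mount.usrflags=ro")
print(parse_cmdline(cl)["root"])       # LABEL=ROOT
print(parse_cmdline(cl)["rootflags"])  # rw
```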
Sep 5 00:05:28.233539 kernel: device-mapper: uevent: version 1.0.3
Sep 5 00:05:28.234500 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 5 00:05:28.275524 kernel: raid6: avx2x4 gen() 29840 MB/s
Sep 5 00:05:28.292505 kernel: raid6: avx2x2 gen() 30420 MB/s
Sep 5 00:05:28.309542 kernel: raid6: avx2x1 gen() 25770 MB/s
Sep 5 00:05:28.309565 kernel: raid6: using algorithm avx2x2 gen() 30420 MB/s
Sep 5 00:05:28.327577 kernel: raid6: .... xor() 19694 MB/s, rmw enabled
Sep 5 00:05:28.327600 kernel: raid6: using avx2x2 recovery algorithm
Sep 5 00:05:28.348502 kernel: xor: automatically using best checksumming function avx
Sep 5 00:05:28.503502 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 00:05:28.516628 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:05:28.529618 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:05:28.543848 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Sep 5 00:05:28.548457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:05:28.561602 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 00:05:28.576203 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 5 00:05:28.609124 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:05:28.622637 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:05:28.684531 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:05:28.694612 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 00:05:28.706393 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:05:28.709543 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:05:28.710837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:05:28.713137 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:05:28.724503 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 5 00:05:28.725659 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 00:05:28.730691 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 00:05:28.732513 kernel: cryptd: max_cpu_qlen set to 1000
Sep 5 00:05:28.739233 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 00:05:28.739257 kernel: GPT:9289727 != 19775487
Sep 5 00:05:28.739268 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 00:05:28.739279 kernel: GPT:9289727 != 19775487
Sep 5 00:05:28.739289 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 00:05:28.739299 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:05:28.736077 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:05:28.744667 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 5 00:05:28.744689 kernel: AES CTR mode by8 optimization enabled
Sep 5 00:05:28.758794 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:05:28.758987 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:05:28.763415 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:05:28.770844 kernel: libata version 3.00 loaded.
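The GPT complaints right after virtio_blk attaches are the classic signature of a disk image that was grown after partitioning: the backup GPT header still sits at the old end of the image rather than at the current last LBA (the log itself points at GNU Parted as the repair tool). The logged numbers line up, as a quick sketch shows:

```python
SECTOR = 512
total_sectors = 19775488  # "virtio_blk ... [vda] 19775488 512-byte logical blocks"
backup_lba    = 9289727   # where the backup GPT header was actually found

print(total_sectors - 1)                # 19775487: expected backup-header LBA,
                                        # matching "GPT:9289727 != 19775487"
print((backup_lba + 1) * SECTOR / 1e9)  # ~4.76 GB: the image size before growing
print(total_sectors * SECTOR / 1e9)     # ~10.13 GB, matching "(10.1 GB/9.43 GiB)"
```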
Sep 5 00:05:28.765026 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:05:28.765223 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:05:28.770874 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:28.779108 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463)
Sep 5 00:05:28.781498 kernel: ahci 0000:00:1f.2: version 3.0
Sep 5 00:05:28.781696 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 5 00:05:28.785956 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 5 00:05:28.786214 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (473)
Sep 5 00:05:28.786239 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 5 00:05:28.784712 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:28.802503 kernel: scsi host0: ahci
Sep 5 00:05:28.802782 kernel: scsi host1: ahci
Sep 5 00:05:28.804504 kernel: scsi host2: ahci
Sep 5 00:05:28.806489 kernel: scsi host3: ahci
Sep 5 00:05:28.806770 kernel: scsi host4: ahci
Sep 5 00:05:28.806992 kernel: scsi host5: ahci
Sep 5 00:05:28.807521 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Sep 5 00:05:28.807549 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Sep 5 00:05:28.807564 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Sep 5 00:05:28.807579 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Sep 5 00:05:28.807599 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Sep 5 00:05:28.807614 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Sep 5 00:05:28.811369 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 00:05:28.845202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 00:05:28.845551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:05:28.853026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 00:05:28.853120 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 00:05:28.861905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:05:28.875626 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 00:05:28.877496 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 5 00:05:28.886596 disk-uuid[567]: Primary Header is updated.
Sep 5 00:05:28.886596 disk-uuid[567]: Secondary Entries is updated.
Sep 5 00:05:28.886596 disk-uuid[567]: Secondary Header is updated.
Sep 5 00:05:28.891506 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:05:28.897524 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:05:28.899655 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:05:29.120957 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 5 00:05:29.121045 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 5 00:05:29.121062 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 5 00:05:29.122520 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 5 00:05:29.122626 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 5 00:05:29.123513 kernel: ata3.00: applying bridge limits
Sep 5 00:05:29.124512 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 5 00:05:29.125513 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 5 00:05:29.125539 kernel: ata3.00: configured for UDMA/100
Sep 5 00:05:29.126508 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 5 00:05:29.178033 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 5 00:05:29.178356 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 5 00:05:29.190493 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 5 00:05:29.896516 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 00:05:29.897381 disk-uuid[569]: The operation has completed successfully.
Sep 5 00:05:29.926924 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 00:05:29.927045 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 00:05:29.948693 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 00:05:29.954099 sh[592]: Success
Sep 5 00:05:29.967508 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 5 00:05:30.001743 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 00:05:30.019964 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 00:05:30.024019 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 00:05:30.036149 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d
Sep 5 00:05:30.036177 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:05:30.036188 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 5 00:05:30.037150 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 00:05:30.037877 kernel: BTRFS info (device dm-0): using free space tree
Sep 5 00:05:30.042802 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 00:05:30.045116 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 00:05:30.061632 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 00:05:30.064627 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 00:05:30.072652 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:05:30.072680 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:05:30.072691 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:05:30.075490 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:05:30.085336 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 5 00:05:30.087247 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:05:30.097935 systemd[1]: Finished ignition-setup.service - Ignition (setup).
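The verity.usrhash value on the kernel command line is the root of a Merkle tree over the read-only /usr image, which the "device-mapper: verity: sha256" line shows being verified with the SHA-NI implementation. A toy illustration of the tree idea only; real dm-verity adds a superblock, a salt, and a fixed on-disk layout, none of which is modeled here:

```python
import hashlib

BLOCK = 4096  # dm-verity's usual data/hash block size (assumed)

def root_hash(data: bytes) -> str:
    """Hash 4 KiB data blocks, pack the digests into blocks, and repeat
    until a single digest remains: that digest is the root hash."""
    level = [data[i:i + BLOCK].ljust(BLOCK, b"\0") for i in range(0, len(data), BLOCK)]
    digests = [hashlib.sha256(b).digest() for b in level]
    while len(digests) > 1:
        packed = b"".join(digests)
        level = [packed[i:i + BLOCK].ljust(BLOCK, b"\0")
                 for i in range(0, len(packed), BLOCK)]
        digests = [hashlib.sha256(b).digest() for b in level]
    return digests[0].hex()

print(root_hash(b"usr partition contents" * 4096))
# A 64-hex-digit root, the same shape as the verity.usrhash kernel argument
```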
Sep 5 00:05:30.109697 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 00:05:30.173123 ignition[677]: Ignition 2.19.0
Sep 5 00:05:30.173136 ignition[677]: Stage: fetch-offline
Sep 5 00:05:30.173176 ignition[677]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:05:30.173188 ignition[677]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:05:30.173285 ignition[677]: parsed url from cmdline: ""
Sep 5 00:05:30.173290 ignition[677]: no config URL provided
Sep 5 00:05:30.173307 ignition[677]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 00:05:30.173319 ignition[677]: no config at "/usr/lib/ignition/user.ign"
Sep 5 00:05:30.173349 ignition[677]: op(1): [started] loading QEMU firmware config module
Sep 5 00:05:30.173356 ignition[677]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 00:05:30.185035 ignition[677]: op(1): [finished] loading QEMU firmware config module
Sep 5 00:05:30.206561 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:05:30.219608 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:05:30.233123 ignition[677]: parsing config with SHA512: a1cfd5f00785af8fa04d6164c79881342f61541ca4ef4dd40cc52b0ce54cc9c6d8fcac00e467d861fe746074b2cd916ea1b6050da0158ec18d914058cab3f7f2
Sep 5 00:05:30.237468 unknown[677]: fetched base config from "system"
Sep 5 00:05:30.237496 unknown[677]: fetched user config from "qemu"
Sep 5 00:05:30.237920 ignition[677]: fetch-offline: fetch-offline passed
Sep 5 00:05:30.237992 ignition[677]: Ignition finished successfully
Sep 5 00:05:30.242762 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:05:30.246866 systemd-networkd[781]: lo: Link UP
Sep 5 00:05:30.246876 systemd-networkd[781]: lo: Gained carrier
Sep 5 00:05:30.248580 systemd-networkd[781]: Enumeration completed
Sep 5 00:05:30.248719 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:05:30.249002 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:05:30.249006 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:05:30.250205 systemd[1]: Reached target network.target - Network.
Sep 5 00:05:30.250449 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 00:05:30.251286 systemd-networkd[781]: eth0: Link UP
Sep 5 00:05:30.251300 systemd-networkd[781]: eth0: Gained carrier
Sep 5 00:05:30.251308 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:05:30.264761 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 00:05:30.270525 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:05:30.278823 ignition[784]: Ignition 2.19.0
Sep 5 00:05:30.278833 ignition[784]: Stage: kargs
Sep 5 00:05:30.279011 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:05:30.279022 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:05:30.279819 ignition[784]: kargs: kargs passed
Sep 5 00:05:30.279864 ignition[784]: Ignition finished successfully
Sep 5 00:05:30.286631 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
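Ignition logs the SHA512 of the config it parsed, which makes it easy to confirm which config a given boot actually used. Reproducing the digest is a one-liner (config.ign here is a hypothetical local copy of the rendered config):

```python
import hashlib

with open("config.ign", "rb") as f:  # hypothetical copy of the Ignition config
    print(hashlib.sha512(f.read()).hexdigest())
# For the same bytes this equals the "parsing config with SHA512: a1cfd5..." value
```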
Sep 5 00:05:30.303758 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 00:05:30.316183 ignition[793]: Ignition 2.19.0
Sep 5 00:05:30.316200 ignition[793]: Stage: disks
Sep 5 00:05:30.316388 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Sep 5 00:05:30.316400 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:05:30.320307 ignition[793]: disks: disks passed
Sep 5 00:05:30.320928 ignition[793]: Ignition finished successfully
Sep 5 00:05:30.323856 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 00:05:30.324130 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 00:05:30.326913 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 00:05:30.329108 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:05:30.329365 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:05:30.329875 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:05:30.345753 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 00:05:30.357922 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 5 00:05:30.365100 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 00:05:30.380626 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 00:05:30.470501 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none.
Sep 5 00:05:30.471029 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 00:05:30.472404 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:05:30.484555 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:05:30.486126 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:05:30.487449 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:05:30.487512 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:05:30.494651 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Sep 5 00:05:30.494671 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:05:30.487538 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:05:30.499300 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:05:30.499314 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:05:30.494730 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:05:30.501955 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:05:30.500667 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:05:30.503048 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:05:30.536966 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:05:30.542034 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:05:30.547019 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:05:30.551414 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:05:30.641638 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:05:30.657749 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:05:30.659084 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:05:30.668525 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:05:30.910764 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:05:30.923776 ignition[924]: INFO : Ignition 2.19.0
Sep 5 00:05:30.923776 ignition[924]: INFO : Stage: mount
Sep 5 00:05:30.925831 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:05:30.925831 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:05:30.929176 ignition[924]: INFO : mount: mount passed
Sep 5 00:05:30.930036 ignition[924]: INFO : Ignition finished successfully
Sep 5 00:05:30.933249 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:05:30.945704 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:05:31.036369 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:05:31.047777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:05:31.055847 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Sep 5 00:05:31.055882 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:05:31.055898 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:05:31.056711 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:05:31.060506 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:05:31.062038 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:05:31.100672 ignition[954]: INFO : Ignition 2.19.0 Sep 5 00:05:31.100672 ignition[954]: INFO : Stage: files Sep 5 00:05:31.102673 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:05:31.102673 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:05:31.102673 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Sep 5 00:05:31.107350 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 5 00:05:31.107350 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 5 00:05:31.107350 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 5 00:05:31.107350 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 5 00:05:31.112825 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 5 00:05:31.112825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 5 00:05:31.112825 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 5 00:05:31.107709 unknown[954]: wrote ssh authorized keys file for user: core Sep 5 00:05:31.188349 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 5 00:05:31.485742 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 5 00:05:31.485742 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:05:31.489806 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 5 00:05:31.945739 systemd-networkd[781]: eth0: Gained IPv6LL Sep 5 00:05:32.210536 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 5 00:05:32.980100 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 5 00:05:32.980100 ignition[954]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 5 00:05:32.983709 ignition[954]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 5 00:05:33.013518 ignition[954]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:05:33.020808 ignition[954]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 5 00:05:33.022573 ignition[954]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 5 00:05:33.022573 ignition[954]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 5 00:05:33.022573 ignition[954]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 5 00:05:33.022573 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:05:33.022573 ignition[954]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 5 00:05:33.022573 ignition[954]: INFO : files: files passed Sep 5 00:05:33.022573 ignition[954]: INFO : Ignition finished successfully Sep 5 00:05:33.035157 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 5 00:05:33.048719 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 5 00:05:33.050859 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 5 00:05:33.057872 systemd[1]: ignition-quench.service: Deactivated successfully. 
Sep 5 00:05:33.058080 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 5 00:05:33.064296 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory Sep 5 00:05:33.068946 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:05:33.070949 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:05:33.072636 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 5 00:05:33.076639 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:05:33.077165 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 5 00:05:33.094755 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 5 00:05:33.136445 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 5 00:05:33.136792 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 5 00:05:33.140532 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 5 00:05:33.140657 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 5 00:05:33.142684 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 5 00:05:33.153735 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 5 00:05:33.169286 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:05:33.176753 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 5 00:05:33.186354 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:05:33.186536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:05:33.186882 systemd[1]: Stopped target timers.target - Timer Units. Sep 5 00:05:33.187187 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 5 00:05:33.187316 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 5 00:05:33.188171 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 5 00:05:33.188523 systemd[1]: Stopped target basic.target - Basic System. Sep 5 00:05:33.188844 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 5 00:05:33.189159 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 5 00:05:33.189500 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 5 00:05:33.189820 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 5 00:05:33.190180 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 5 00:05:33.190487 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 5 00:05:33.190834 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 5 00:05:33.191179 systemd[1]: Stopped target swap.target - Swaps. Sep 5 00:05:33.191454 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 5 00:05:33.191599 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:05:33.192161 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:05:33.192495 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Sep 5 00:05:33.192774 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 5 00:05:33.192889 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:05:33.193287 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 5 00:05:33.193397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 5 00:05:33.193921 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 5 00:05:33.194061 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 5 00:05:33.251274 ignition[1008]: INFO : Ignition 2.19.0 Sep 5 00:05:33.251274 ignition[1008]: INFO : Stage: umount Sep 5 00:05:33.251274 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 5 00:05:33.251274 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:05:33.251274 ignition[1008]: INFO : umount: umount passed Sep 5 00:05:33.251274 ignition[1008]: INFO : Ignition finished successfully Sep 5 00:05:33.194516 systemd[1]: Stopped target paths.target - Path Units. Sep 5 00:05:33.194890 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 5 00:05:33.198553 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:05:33.198947 systemd[1]: Stopped target slices.target - Slice Units. Sep 5 00:05:33.199265 systemd[1]: Stopped target sockets.target - Socket Units. Sep 5 00:05:33.199811 systemd[1]: iscsid.socket: Deactivated successfully. Sep 5 00:05:33.199940 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:05:33.200261 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 5 00:05:33.200349 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:05:33.200820 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 5 00:05:33.200963 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 5 00:05:33.201275 systemd[1]: ignition-files.service: Deactivated successfully. Sep 5 00:05:33.201377 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 5 00:05:33.225798 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 5 00:05:33.227048 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 5 00:05:33.227208 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:05:33.230794 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 5 00:05:33.231798 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 5 00:05:33.231973 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:05:33.234560 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 5 00:05:33.234714 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:05:33.241680 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 5 00:05:33.241844 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 5 00:05:33.253402 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 5 00:05:33.253601 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 5 00:05:33.255569 systemd[1]: Stopped target network.target - Network. Sep 5 00:05:33.257601 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 5 00:05:33.257677 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Sep 5 00:05:33.259670 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 5 00:05:33.259746 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 5 00:05:33.261763 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 5 00:05:33.261829 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 5 00:05:33.263814 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 5 00:05:33.263876 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 5 00:05:33.266364 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 5 00:05:33.269839 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 5 00:05:33.272806 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 5 00:05:33.273610 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 5 00:05:33.275521 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 5 00:05:33.275680 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 00:05:33.277981 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 00:05:33.278059 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:05:33.286742 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 00:05:33.289021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 00:05:33.289121 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:05:33.293159 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:05:33.296118 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 5 00:05:33.297549 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 5 00:05:33.325332 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 00:05:33.328840 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:05:33.346087 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 00:05:33.346182 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 00:05:33.349257 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 00:05:33.349316 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:05:33.352569 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 00:05:33.353579 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:05:33.355976 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 00:05:33.357019 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 00:05:33.359304 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:05:33.360460 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:05:33.372639 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 00:05:33.373806 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 00:05:33.373866 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:05:33.376229 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 00:05:33.376291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 00:05:33.377269 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Sep 5 00:05:33.377338 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:05:33.379336 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 00:05:33.379397 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:05:33.379874 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:05:33.379932 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:05:33.380753 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 00:05:33.380922 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 00:05:33.385991 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 00:05:33.386124 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 00:05:33.462645 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 00:05:33.463907 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 00:05:33.466621 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 00:05:33.469183 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 5 00:05:33.470377 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 00:05:33.487664 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 00:05:33.494904 systemd[1]: Switching root. Sep 5 00:05:33.532731 systemd-journald[191]: Journal stopped Sep 5 00:05:34.743807 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Sep 5 00:05:34.743885 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 00:05:34.743905 kernel: SELinux: policy capability open_perms=1 Sep 5 00:05:34.743916 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 00:05:34.743927 kernel: SELinux: policy capability always_check_network=0 Sep 5 00:05:34.743938 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 00:05:34.743950 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 00:05:34.743961 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 00:05:34.743972 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 00:05:34.743985 kernel: audit: type=1403 audit(1757030733.997:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 00:05:34.743998 systemd[1]: Successfully loaded SELinux policy in 39.332ms. Sep 5 00:05:34.744024 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.258ms. Sep 5 00:05:34.744037 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:05:34.744049 systemd[1]: Detected virtualization kvm. Sep 5 00:05:34.744061 systemd[1]: Detected architecture x86-64. Sep 5 00:05:34.744073 systemd[1]: Detected first boot. Sep 5 00:05:34.744085 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:05:34.744097 zram_generator::config[1051]: No configuration found. Sep 5 00:05:34.744112 systemd[1]: Populated /etc with preset unit settings. Sep 5 00:05:34.744124 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 00:05:34.744136 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
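The "SELinux: policy capability ..." kernel lines enumerate the capabilities compiled into the policy that was just loaded. Once the system is up, the same flags can be read back from selinuxfs, which gives a quick cross-check of the messages above (assuming selinuxfs is mounted at its default location):

    # Each file under policy_capabilities mirrors one kernel line
    # (1 = enabled, 0 = disabled).
    for cap in /sys/fs/selinux/policy_capabilities/*; do
        printf '%s=%s\n' "${cap##*/}" "$(cat "$cap")"
    done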
Sep 5 00:05:34.744148 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 00:05:34.744160 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 00:05:34.744174 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 00:05:34.744198 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 00:05:34.744210 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 00:05:34.744225 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 5 00:05:34.744238 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 00:05:34.744250 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 00:05:34.744261 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 00:05:34.744273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:05:34.744285 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:05:34.744298 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 00:05:34.744309 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 00:05:34.744321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 00:05:34.744336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:05:34.744348 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 5 00:05:34.744360 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:05:34.744372 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 00:05:34.744384 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 00:05:34.744396 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 00:05:34.744408 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 00:05:34.744422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:05:34.744434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:05:34.744449 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:05:34.744460 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:05:34.744494 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 00:05:34.744506 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 00:05:34.744518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:05:34.744530 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:05:34.744542 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:05:34.744553 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 00:05:34.744568 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 00:05:34.744580 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 00:05:34.744592 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 5 00:05:34.744603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:34.744615 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 00:05:34.744627 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 00:05:34.744638 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 00:05:34.744651 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 00:05:34.744665 systemd[1]: Reached target machines.target - Containers. Sep 5 00:05:34.744677 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 00:05:34.744689 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:05:34.744702 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:05:34.744714 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 00:05:34.744727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:05:34.744738 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:05:34.744755 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:05:34.744767 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 00:05:34.744781 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:05:34.744793 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 00:05:34.744805 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 00:05:34.744817 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 00:05:34.744828 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 00:05:34.744840 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 00:05:34.744852 kernel: fuse: init (API version 7.39) Sep 5 00:05:34.744863 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:05:34.744875 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:05:34.744890 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 00:05:34.744902 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 5 00:05:34.744914 kernel: ACPI: bus type drm_connector registered Sep 5 00:05:34.744925 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:05:34.744937 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 00:05:34.744949 systemd[1]: Stopped verity-setup.service. Sep 5 00:05:34.744961 kernel: loop: module loaded Sep 5 00:05:34.744972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:34.744989 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 00:05:34.745008 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 00:05:34.745020 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 5 00:05:34.745048 systemd-journald[1121]: Collecting audit messages is disabled. Sep 5 00:05:34.745070 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 00:05:34.745083 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 00:05:34.745097 systemd-journald[1121]: Journal started Sep 5 00:05:34.745119 systemd-journald[1121]: Runtime Journal (/run/log/journal/9c9aaa5a7d4345898955333d32cdb23e) is 6.0M, max 48.4M, 42.3M free. Sep 5 00:05:34.507449 systemd[1]: Queued start job for default target multi-user.target. Sep 5 00:05:34.524795 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 00:05:34.525254 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 00:05:34.747521 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:05:34.749166 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 00:05:34.750387 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 00:05:34.751785 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 00:05:34.753278 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 5 00:05:34.753463 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 00:05:34.754998 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:05:34.755166 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:05:34.756576 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:05:34.756749 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:05:34.758095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:05:34.758310 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:05:34.759879 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 00:05:34.760075 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 00:05:34.761492 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:05:34.761684 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:05:34.763067 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:05:34.764446 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 00:05:34.765986 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 00:05:34.780002 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 00:05:34.788568 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 00:05:34.790863 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 00:05:34.792087 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 00:05:34.792120 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:05:34.794143 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 5 00:05:34.796499 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 5 00:05:34.799638 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
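The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop services above are all instances of systemd's modprobe@.service template: each instance loads the kernel module named by its instance suffix, which is why the fuse, drm and loop driver banners appear interleaved with the unit messages. For example:

    systemctl start modprobe@loop.service        # roughly equivalent to: modprobe loop
    systemctl --no-pager status 'modprobe@*.service'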
Sep 5 00:05:34.800804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:05:34.803640 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 00:05:34.808769 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 00:05:34.810380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:05:34.816513 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 00:05:34.817679 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:05:34.818794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:05:34.824603 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 00:05:34.834870 systemd-journald[1121]: Time spent on flushing to /var/log/journal/9c9aaa5a7d4345898955333d32cdb23e is 30.339ms for 949 entries. Sep 5 00:05:34.834870 systemd-journald[1121]: System Journal (/var/log/journal/9c9aaa5a7d4345898955333d32cdb23e) is 8.0M, max 195.6M, 187.6M free. Sep 5 00:05:34.873492 systemd-journald[1121]: Received client request to flush runtime journal. Sep 5 00:05:34.873538 kernel: loop0: detected capacity change from 0 to 140768 Sep 5 00:05:34.873553 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 00:05:34.830292 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 00:05:34.836671 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:05:34.838428 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 00:05:34.840136 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 00:05:34.842612 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 5 00:05:34.844283 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 00:05:34.849687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 00:05:34.860652 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 5 00:05:34.866573 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 5 00:05:34.868247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:05:34.875831 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 00:05:34.890313 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 5 00:05:34.892642 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 00:05:34.899668 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:05:34.901649 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 00:05:34.902286 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 5 00:05:34.911642 kernel: loop1: detected capacity change from 0 to 142488 Sep 5 00:05:34.934235 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. Sep 5 00:05:34.934256 systemd-tmpfiles[1183]: ACLs are not supported, ignoring. 
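The journald accounting lines above report the runtime journal (6.0M used, 48.4M cap) and, once /var is writable, the persistent system journal (8.0M used, 195.6M cap); the flush of 949 entries to /var/log/journal took 30.339ms. The same figures are available from the running system:

    journalctl --disk-usage    # total space used by active + archived journals
    journalctl --header        # per-file metadata for the journal files above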
Sep 5 00:05:34.941353 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:05:34.945601 kernel: loop2: detected capacity change from 0 to 224512 Sep 5 00:05:34.972602 kernel: loop3: detected capacity change from 0 to 140768 Sep 5 00:05:34.985589 kernel: loop4: detected capacity change from 0 to 142488 Sep 5 00:05:34.996491 kernel: loop5: detected capacity change from 0 to 224512 Sep 5 00:05:35.002343 (sd-merge)[1189]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 00:05:35.002909 (sd-merge)[1189]: Merged extensions into '/usr'. Sep 5 00:05:35.008989 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 00:05:35.009007 systemd[1]: Reloading... Sep 5 00:05:35.063538 zram_generator::config[1214]: No configuration found. Sep 5 00:05:35.145662 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 00:05:35.201763 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:05:35.251095 systemd[1]: Reloading finished in 241 ms. Sep 5 00:05:35.286301 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 00:05:35.287877 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 00:05:35.304649 systemd[1]: Starting ensure-sysext.service... Sep 5 00:05:35.306651 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:05:35.313928 systemd[1]: Reloading requested from client PID 1252 ('systemctl') (unit ensure-sysext.service)... Sep 5 00:05:35.313943 systemd[1]: Reloading... Sep 5 00:05:35.332077 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 00:05:35.332462 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 00:05:35.333467 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 00:05:35.333793 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 5 00:05:35.333869 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Sep 5 00:05:35.338324 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:05:35.338337 systemd-tmpfiles[1254]: Skipping /boot Sep 5 00:05:35.355573 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 00:05:35.355585 systemd-tmpfiles[1254]: Skipping /boot Sep 5 00:05:35.367663 zram_generator::config[1278]: No configuration found. Sep 5 00:05:35.486753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:05:35.536316 systemd[1]: Reloading finished in 222 ms. Sep 5 00:05:35.555349 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 00:05:35.569374 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:05:35.576565 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
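The (sd-merge) lines are systemd-sysext combining the three extension images named in the log into a read-only overlay on /usr; the systemd reload that follows is what makes the freshly merged containerd/docker/kubernetes unit files visible. The merge state can be inspected or redone by hand:

    systemd-sysext list       # the three extensions named above
    systemd-sysext status     # which hierarchies currently carry an overlay
    systemd-sysext refresh    # unmerge + re-merge after changing /etc/extensions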
Sep 5 00:05:35.579117 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 00:05:35.581492 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 00:05:35.584651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:05:35.588448 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:05:35.591632 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 00:05:35.597424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:35.597716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:05:35.606736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:05:35.613458 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:05:35.616703 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:05:35.618311 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:05:35.624760 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 00:05:35.626091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:35.628540 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 00:05:35.630315 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:05:35.630551 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:05:35.633291 systemd-udevd[1325]: Using default interface naming scheme 'v255'. Sep 5 00:05:35.637918 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:05:35.638099 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:05:35.640879 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:05:35.641193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:05:35.645082 augenrules[1345]: No rules Sep 5 00:05:35.646001 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:05:35.654548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:05:35.657770 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:35.659292 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:05:35.667968 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 00:05:35.672788 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 00:05:35.675821 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:05:35.677637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:05:35.682137 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:05:35.685241 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
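augenrules assembles /etc/audit/audit.rules from the snippets in /etc/audit/rules.d/, so "augenrules[1345]: No rules" above simply means that directory contributed nothing on this image. If rules were wanted, a minimal snippet would look like this (illustrative only; the file name and key are hypothetical):

    cat > /etc/audit/rules.d/10-example.rules <<'EOF'
    -w /etc/passwd -p wa -k passwd-changes
    EOF
    augenrules --load    # regenerate audit.rules and load it into the kernel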
Sep 5 00:05:35.687546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:35.689800 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 00:05:35.697850 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:05:35.698100 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:05:35.700105 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 00:05:35.710889 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 00:05:35.712505 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1377) Sep 5 00:05:35.714591 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 00:05:35.714808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 00:05:35.720859 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 00:05:35.721080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 00:05:35.732142 systemd[1]: Finished ensure-sysext.service. Sep 5 00:05:35.733179 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 00:05:35.739125 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 5 00:05:35.750253 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:35.750409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 00:05:35.755640 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 00:05:35.761660 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 00:05:35.763060 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 00:05:35.763111 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 00:05:35.768811 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 00:05:35.770227 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 00:05:35.770264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 5 00:05:35.770858 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 00:05:35.771053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 00:05:35.774169 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 00:05:35.774353 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 00:05:35.778810 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 00:05:35.782143 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:05:35.789758 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Sep 5 00:05:35.828321 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 5 00:05:35.829572 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 5 00:05:35.832607 kernel: ACPI: button: Power Button [PWRF] Sep 5 00:05:35.834073 systemd-networkd[1373]: lo: Link UP Sep 5 00:05:35.834087 systemd-networkd[1373]: lo: Gained carrier Sep 5 00:05:35.836591 systemd-networkd[1373]: Enumeration completed Sep 5 00:05:35.836701 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:05:35.838947 systemd-resolved[1324]: Positive Trust Anchors: Sep 5 00:05:35.839281 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:05:35.839353 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:05:35.840454 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:05:35.840467 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:05:35.841432 systemd-networkd[1373]: eth0: Link UP Sep 5 00:05:35.841442 systemd-networkd[1373]: eth0: Gained carrier Sep 5 00:05:35.841457 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:05:35.846775 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 5 00:05:35.851178 systemd-resolved[1324]: Defaulting to hostname 'linux'. Sep 5 00:05:35.854676 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:05:35.856238 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 5 00:05:35.857765 systemd[1]: Reached target network.target - Network. Sep 5 00:05:35.857857 systemd-networkd[1373]: eth0: DHCPv4 address 10.0.0.21/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:05:35.858745 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Sep 5 00:05:35.858910 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:05:36.380065 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 5 00:05:36.380120 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2025-09-05 00:05:36.379944 UTC. Sep 5 00:05:36.380256 systemd-resolved[1324]: Clock change detected. Flushing caches. Sep 5 00:05:36.381090 systemd[1]: Reached target time-set.target - System Time Set. 
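eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network shipped by Flatcar (hence the "potentially unpredictable interface name" note) and obtained 10.0.0.21/16 from the DHCP server at 10.0.0.1, which also serves as the NTP source that timesyncd then contacts. Functionally, the matching file boils down to a DHCP catch-all like this sketch (the shipped file carries more options):

    # zz-default.network, minimal equivalent -- sketch, not the verbatim file
    [Match]
    Name=*

    [Network]
    DHCP=yes

On the running host, networkctl status eth0 confirms the resulting lease.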
Sep 5 00:05:36.391878 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 5 00:05:36.437530 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 5 00:05:36.438769 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 5 00:05:36.438993 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 5 00:05:36.497869 kernel: mousedev: PS/2 mouse device common for all mice Sep 5 00:05:36.504604 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:05:36.511014 kernel: kvm_amd: TSC scaling supported Sep 5 00:05:36.511096 kernel: kvm_amd: Nested Virtualization enabled Sep 5 00:05:36.511110 kernel: kvm_amd: Nested Paging enabled Sep 5 00:05:36.511957 kernel: kvm_amd: LBR virtualization supported Sep 5 00:05:36.511977 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 5 00:05:36.512942 kernel: kvm_amd: Virtual GIF supported Sep 5 00:05:36.534860 kernel: EDAC MC: Ver: 3.0.0 Sep 5 00:05:36.570561 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 5 00:05:36.613004 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 5 00:05:36.614878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:05:36.623727 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:05:36.804293 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 5 00:05:36.805990 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:05:36.807146 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:05:36.808320 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 5 00:05:36.809588 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 5 00:05:36.811163 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 5 00:05:36.812469 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 5 00:05:36.813938 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 5 00:05:36.815153 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 5 00:05:36.815186 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:05:36.816114 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:05:36.818052 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 5 00:05:36.821186 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 5 00:05:36.831705 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 5 00:05:36.834401 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 5 00:05:36.836078 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 5 00:05:36.837212 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:05:36.838160 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:05:36.839112 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 5 00:05:36.839147 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
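The kvm_amd banner indicates the guest itself exposes AMD-V with nested paging enabled, so this VM could in principle host further VMs. The module parameters give the same answer at runtime (standard sysfs paths for the kvm_amd module):

    cat /sys/module/kvm_amd/parameters/nested    # 1 when nested virt is enabled
    grep -m1 -o svm /proc/cpuinfo                # AMD-V flag visible in the guest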
Sep 5 00:05:36.840423 systemd[1]: Starting containerd.service - containerd container runtime... Sep 5 00:05:36.842483 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 5 00:05:36.844906 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 5 00:05:36.846979 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 5 00:05:36.855887 jq[1430]: false Sep 5 00:05:36.856014 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 5 00:05:36.857135 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 5 00:05:36.858798 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 5 00:05:36.862972 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 5 00:05:36.865282 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 5 00:05:36.869082 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 5 00:05:36.869445 dbus-daemon[1429]: [system] SELinux support is enabled Sep 5 00:05:36.878997 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 5 00:05:36.880848 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 5 00:05:36.881338 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 5 00:05:36.882385 extend-filesystems[1431]: Found loop3 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found loop4 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found loop5 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found sr0 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda1 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda2 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda3 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found usr Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda4 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda6 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda7 Sep 5 00:05:36.883394 extend-filesystems[1431]: Found vda9 Sep 5 00:05:36.883394 extend-filesystems[1431]: Checking size of /dev/vda9 Sep 5 00:05:36.885633 systemd[1]: Starting update-engine.service - Update Engine... Sep 5 00:05:36.917092 update_engine[1443]: I20250905 00:05:36.905949 1443 main.cc:92] Flatcar Update Engine starting Sep 5 00:05:36.917092 update_engine[1443]: I20250905 00:05:36.907238 1443 update_check_scheduler.cc:74] Next update check in 8m6s Sep 5 00:05:36.888614 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 5 00:05:36.917461 jq[1448]: true Sep 5 00:05:36.892556 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 5 00:05:36.898317 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 5 00:05:36.901686 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 5 00:05:36.902741 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 5 00:05:36.903238 systemd[1]: motdgen.service: Deactivated successfully. Sep 5 00:05:36.903464 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Sep 5 00:05:36.907218 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 5 00:05:36.907415 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 5 00:05:36.931652 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 5 00:05:36.932788 jq[1452]: true Sep 5 00:05:36.931703 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 5 00:05:36.933069 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 5 00:05:36.933097 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 5 00:05:36.933234 extend-filesystems[1431]: Resized partition /dev/vda9 Sep 5 00:05:36.936210 tar[1451]: linux-amd64/LICENSE Sep 5 00:05:36.936210 tar[1451]: linux-amd64/helm Sep 5 00:05:36.942314 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024) Sep 5 00:05:36.949246 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1359) Sep 5 00:05:36.949280 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 5 00:05:36.946860 (ntainerd)[1460]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 5 00:05:36.953301 systemd[1]: Started update-engine.service - Update Engine. Sep 5 00:05:36.990988 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 5 00:05:37.073549 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button) Sep 5 00:05:37.073574 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 5 00:05:37.077017 systemd-logind[1438]: New seat seat0. Sep 5 00:05:37.111841 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 5 00:05:37.116052 systemd[1]: Started systemd-logind.service - User Login Management. Sep 5 00:05:37.124328 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 5 00:05:37.142191 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 5 00:05:37.142191 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 5 00:05:37.142191 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 5 00:05:37.145145 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Sep 5 00:05:37.143389 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 5 00:05:37.143740 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 5 00:05:37.154854 bash[1482]: Updated "/home/core/.ssh/authorized_keys" Sep 5 00:05:37.158869 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 5 00:05:37.161390 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 5 00:05:37.388055 sshd_keygen[1446]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 5 00:05:37.387952 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 5 00:05:37.432449 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 5 00:05:37.442778 systemd[1]: issuegen.service: Deactivated successfully. 
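extend-filesystems grew the root filesystem online: from 553472 4KiB blocks (about 2.1 GiB) to 1864699 blocks (about 7.1 GiB), i.e. resize2fs expanded the mounted ext4 on /dev/vda9 to fill its partition (the partition itself is enlarged earlier in boot). The equivalent manual steps, assuming the partition already has the extra space:

    resize2fs /dev/vda9                          # online grow to the partition size
    dumpe2fs -h /dev/vda9 | grep 'Block count'   # should now report 1864699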
Sep 5 00:05:37.443138 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 5 00:05:37.455232 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 5 00:05:37.479558 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 5 00:05:37.541396 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 5 00:05:37.544271 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 5 00:05:37.545546 systemd[1]: Reached target getty.target - Login Prompts. Sep 5 00:05:37.603594 containerd[1460]: time="2025-09-05T00:05:37.603457482Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 5 00:05:37.638660 containerd[1460]: time="2025-09-05T00:05:37.638521528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:05:37.643565 containerd[1460]: time="2025-09-05T00:05:37.643517455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.643644443Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.643671043Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.643916724Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.643941260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644031469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644044673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644271489Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644286627Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644305502Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644322184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.644857007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 5 00:05:37.645790 containerd[1460]: time="2025-09-05T00:05:37.645153192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 5 00:05:37.646080 containerd[1460]: time="2025-09-05T00:05:37.645354950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 5 00:05:37.646080 containerd[1460]: time="2025-09-05T00:05:37.645372052Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 5 00:05:37.646080 containerd[1460]: time="2025-09-05T00:05:37.645507516Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 5 00:05:37.646080 containerd[1460]: time="2025-09-05T00:05:37.645589510Z" level=info msg="metadata content store policy set" policy=shared Sep 5 00:05:37.653444 containerd[1460]: time="2025-09-05T00:05:37.653392871Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 5 00:05:37.653499 containerd[1460]: time="2025-09-05T00:05:37.653467591Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 5 00:05:37.653535 containerd[1460]: time="2025-09-05T00:05:37.653496706Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 5 00:05:37.653779 containerd[1460]: time="2025-09-05T00:05:37.653743268Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 5 00:05:37.653779 containerd[1460]: time="2025-09-05T00:05:37.653770660Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 5 00:05:37.654029 containerd[1460]: time="2025-09-05T00:05:37.653996563Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 5 00:05:37.657448 containerd[1460]: time="2025-09-05T00:05:37.657342306Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 5 00:05:37.657689 containerd[1460]: time="2025-09-05T00:05:37.657655714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 5 00:05:37.657813 containerd[1460]: time="2025-09-05T00:05:37.657754119Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 5 00:05:37.657883 containerd[1460]: time="2025-09-05T00:05:37.657844047Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 5 00:05:37.657939 containerd[1460]: time="2025-09-05T00:05:37.657908658Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.657980 containerd[1460]: time="2025-09-05T00:05:37.657944886Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.658001 containerd[1460]: time="2025-09-05T00:05:37.657985212Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Sep 5 00:05:37.658021 containerd[1460]: time="2025-09-05T00:05:37.658010179Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.658066 containerd[1460]: time="2025-09-05T00:05:37.658033823Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.658125 containerd[1460]: time="2025-09-05T00:05:37.658082154Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.658167 containerd[1460]: time="2025-09-05T00:05:37.658132729Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.658188 containerd[1460]: time="2025-09-05T00:05:37.658169437Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 5 00:05:37.658253 containerd[1460]: time="2025-09-05T00:05:37.658233858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658276 containerd[1460]: time="2025-09-05T00:05:37.658259296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658303 containerd[1460]: time="2025-09-05T00:05:37.658278091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658303 containerd[1460]: time="2025-09-05T00:05:37.658297718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658407 containerd[1460]: time="2025-09-05T00:05:37.658334126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658407 containerd[1460]: time="2025-09-05T00:05:37.658367348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658407 containerd[1460]: time="2025-09-05T00:05:37.658404909Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658479 containerd[1460]: time="2025-09-05T00:05:37.658425477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658479 containerd[1460]: time="2025-09-05T00:05:37.658443601Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658523 containerd[1460]: time="2025-09-05T00:05:37.658466274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658523 containerd[1460]: time="2025-09-05T00:05:37.658517310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658570 containerd[1460]: time="2025-09-05T00:05:37.658540894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658590 containerd[1460]: time="2025-09-05T00:05:37.658569157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658633 containerd[1460]: time="2025-09-05T00:05:37.658597590Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Sep 5 00:05:37.658680 containerd[1460]: time="2025-09-05T00:05:37.658654196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658704 containerd[1460]: time="2025-09-05T00:05:37.658686086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.658730 containerd[1460]: time="2025-09-05T00:05:37.658702827Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659013420Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659051010Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659068693Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659084233Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659100683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659130820Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659164122Z" level=info msg="NRI interface is disabled by configuration." Sep 5 00:05:37.659455 containerd[1460]: time="2025-09-05T00:05:37.659177457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 5 00:05:37.659719 containerd[1460]: time="2025-09-05T00:05:37.659622552Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 5 00:05:37.659719 containerd[1460]: time="2025-09-05T00:05:37.659677936Z" level=info msg="Connect containerd service" Sep 5 00:05:37.660034 containerd[1460]: time="2025-09-05T00:05:37.659771601Z" level=info msg="using legacy CRI server" Sep 5 00:05:37.660034 containerd[1460]: time="2025-09-05T00:05:37.659787641Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 5 00:05:37.660034 containerd[1460]: time="2025-09-05T00:05:37.660015930Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 5 00:05:37.661134 containerd[1460]: time="2025-09-05T00:05:37.661093951Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 5 00:05:37.661512 
containerd[1460]: time="2025-09-05T00:05:37.661281623Z" level=info msg="Start subscribing containerd event" Sep 5 00:05:37.661512 containerd[1460]: time="2025-09-05T00:05:37.661390797Z" level=info msg="Start recovering state" Sep 5 00:05:37.661594 containerd[1460]: time="2025-09-05T00:05:37.661571897Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 5 00:05:37.661739 containerd[1460]: time="2025-09-05T00:05:37.661640696Z" level=info msg="Start event monitor" Sep 5 00:05:37.661817 containerd[1460]: time="2025-09-05T00:05:37.661804333Z" level=info msg="Start snapshots syncer" Sep 5 00:05:37.661953 containerd[1460]: time="2025-09-05T00:05:37.661888871Z" level=info msg="Start cni network conf syncer for default" Sep 5 00:05:37.661953 containerd[1460]: time="2025-09-05T00:05:37.661918837Z" level=info msg="Start streaming server" Sep 5 00:05:37.662529 containerd[1460]: time="2025-09-05T00:05:37.661668829Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 5 00:05:37.662529 containerd[1460]: time="2025-09-05T00:05:37.662505918Z" level=info msg="containerd successfully booted in 0.060802s" Sep 5 00:05:37.662610 systemd[1]: Started containerd.service - containerd container runtime. Sep 5 00:05:37.867060 tar[1451]: linux-amd64/README.md Sep 5 00:05:37.888960 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 5 00:05:38.034068 systemd-networkd[1373]: eth0: Gained IPv6LL Sep 5 00:05:38.038162 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 5 00:05:38.040281 systemd[1]: Reached target network-online.target - Network is Online. Sep 5 00:05:38.048031 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 5 00:05:38.050530 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:38.052805 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 5 00:05:38.076310 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 5 00:05:38.078201 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 5 00:05:38.078411 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 5 00:05:38.081008 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 5 00:05:39.324574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:39.326682 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 5 00:05:39.328185 systemd[1]: Startup finished in 780ms (kernel) + 6.283s (initrd) + 4.848s (userspace) = 11.912s. Sep 5 00:05:39.340604 (kubelet)[1541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:05:39.894796 kubelet[1541]: E0905 00:05:39.894655 1541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:05:39.898946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:05:39.899144 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:05:39.899515 systemd[1]: kubelet.service: Consumed 1.694s CPU time. Sep 5 00:05:40.870994 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 5 00:05:40.872759 systemd[1]: Started sshd@0-10.0.0.21:22-10.0.0.1:46618.service - OpenSSH per-connection server daemon (10.0.0.1:46618). Sep 5 00:05:40.936108 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 46618 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:40.940227 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:40.950202 systemd-logind[1438]: New session 1 of user core. Sep 5 00:05:40.951542 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 5 00:05:40.960056 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 5 00:05:40.990171 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 5 00:05:41.002133 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 5 00:05:41.005209 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 5 00:05:41.124709 systemd[1558]: Queued start job for default target default.target. Sep 5 00:05:41.131728 systemd[1558]: Created slice app.slice - User Application Slice. Sep 5 00:05:41.131764 systemd[1558]: Reached target paths.target - Paths. Sep 5 00:05:41.131782 systemd[1558]: Reached target timers.target - Timers. Sep 5 00:05:41.133747 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 5 00:05:41.148111 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 5 00:05:41.148269 systemd[1558]: Reached target sockets.target - Sockets. Sep 5 00:05:41.148289 systemd[1558]: Reached target basic.target - Basic System. Sep 5 00:05:41.148328 systemd[1558]: Reached target default.target - Main User Target. Sep 5 00:05:41.148364 systemd[1558]: Startup finished in 135ms. Sep 5 00:05:41.148691 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 5 00:05:41.150241 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 5 00:05:41.211332 systemd[1]: Started sshd@1-10.0.0.21:22-10.0.0.1:46630.service - OpenSSH per-connection server daemon (10.0.0.1:46630). Sep 5 00:05:41.250738 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 46630 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:41.252294 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:41.256601 systemd-logind[1438]: New session 2 of user core. Sep 5 00:05:41.275087 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 5 00:05:41.330595 sshd[1569]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:41.346875 systemd[1]: sshd@1-10.0.0.21:22-10.0.0.1:46630.service: Deactivated successfully. Sep 5 00:05:41.348922 systemd[1]: session-2.scope: Deactivated successfully. Sep 5 00:05:41.350498 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit. Sep 5 00:05:41.360130 systemd[1]: Started sshd@2-10.0.0.21:22-10.0.0.1:46640.service - OpenSSH per-connection server daemon (10.0.0.1:46640). Sep 5 00:05:41.361388 systemd-logind[1438]: Removed session 2. Sep 5 00:05:41.389015 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 46640 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:41.390843 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:41.395480 systemd-logind[1438]: New session 3 of user core. Sep 5 00:05:41.404970 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 5 00:05:41.455503 sshd[1576]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:41.466746 systemd[1]: sshd@2-10.0.0.21:22-10.0.0.1:46640.service: Deactivated successfully. Sep 5 00:05:41.468493 systemd[1]: session-3.scope: Deactivated successfully. Sep 5 00:05:41.470078 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit. Sep 5 00:05:41.483046 systemd[1]: Started sshd@3-10.0.0.21:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654). Sep 5 00:05:41.483885 systemd-logind[1438]: Removed session 3. Sep 5 00:05:41.510954 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:41.512419 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:41.516134 systemd-logind[1438]: New session 4 of user core. Sep 5 00:05:41.526000 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 5 00:05:41.580727 sshd[1583]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:41.595084 systemd[1]: sshd@3-10.0.0.21:22-10.0.0.1:46654.service: Deactivated successfully. Sep 5 00:05:41.596800 systemd[1]: session-4.scope: Deactivated successfully. Sep 5 00:05:41.598537 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit. Sep 5 00:05:41.600079 systemd[1]: Started sshd@4-10.0.0.21:22-10.0.0.1:46670.service - OpenSSH per-connection server daemon (10.0.0.1:46670). Sep 5 00:05:41.600911 systemd-logind[1438]: Removed session 4. Sep 5 00:05:41.632482 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 46670 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:41.634057 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:41.638003 systemd-logind[1438]: New session 5 of user core. Sep 5 00:05:41.648977 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 5 00:05:41.706734 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 5 00:05:41.707086 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:41.726632 sudo[1593]: pam_unix(sudo:session): session closed for user root Sep 5 00:05:41.728326 sshd[1590]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:41.748738 systemd[1]: sshd@4-10.0.0.21:22-10.0.0.1:46670.service: Deactivated successfully. Sep 5 00:05:41.750297 systemd[1]: session-5.scope: Deactivated successfully. Sep 5 00:05:41.751992 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit. Sep 5 00:05:41.763053 systemd[1]: Started sshd@5-10.0.0.21:22-10.0.0.1:46686.service - OpenSSH per-connection server daemon (10.0.0.1:46686). Sep 5 00:05:41.763891 systemd-logind[1438]: Removed session 5. Sep 5 00:05:41.790863 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 46686 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:41.792565 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:41.796384 systemd-logind[1438]: New session 6 of user core. Sep 5 00:05:41.805947 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 5 00:05:41.859415 sudo[1602]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 5 00:05:41.859740 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:41.863308 sudo[1602]: pam_unix(sudo:session): session closed for user root Sep 5 00:05:41.869599 sudo[1601]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 5 00:05:41.870016 sudo[1601]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:41.889043 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 5 00:05:41.890773 auditctl[1605]: No rules Sep 5 00:05:41.891239 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 00:05:41.891518 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 5 00:05:41.894288 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 5 00:05:41.924949 augenrules[1623]: No rules Sep 5 00:05:41.925934 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 5 00:05:41.927324 sudo[1601]: pam_unix(sudo:session): session closed for user root Sep 5 00:05:41.929132 sshd[1598]: pam_unix(sshd:session): session closed for user core Sep 5 00:05:41.942663 systemd[1]: sshd@5-10.0.0.21:22-10.0.0.1:46686.service: Deactivated successfully. Sep 5 00:05:41.944353 systemd[1]: session-6.scope: Deactivated successfully. Sep 5 00:05:41.945967 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit. Sep 5 00:05:41.957329 systemd[1]: Started sshd@6-10.0.0.21:22-10.0.0.1:46696.service - OpenSSH per-connection server daemon (10.0.0.1:46696). Sep 5 00:05:41.958358 systemd-logind[1438]: Removed session 6. Sep 5 00:05:41.987066 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 46696 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:05:41.988888 sshd[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:05:41.992866 systemd-logind[1438]: New session 7 of user core. Sep 5 00:05:42.002971 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 5 00:05:42.056129 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 5 00:05:42.056528 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 5 00:05:42.560084 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 5 00:05:42.560179 (dockerd)[1652]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 5 00:05:43.462588 dockerd[1652]: time="2025-09-05T00:05:43.462516707Z" level=info msg="Starting up" Sep 5 00:05:44.822637 dockerd[1652]: time="2025-09-05T00:05:44.822518370Z" level=info msg="Loading containers: start." Sep 5 00:05:45.191861 kernel: Initializing XFRM netlink socket Sep 5 00:05:45.271013 systemd-networkd[1373]: docker0: Link UP Sep 5 00:05:45.431268 dockerd[1652]: time="2025-09-05T00:05:45.431218572Z" level=info msg="Loading containers: done." Sep 5 00:05:45.447728 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2521584112-merged.mount: Deactivated successfully. 
Sep 5 00:05:45.451102 dockerd[1652]: time="2025-09-05T00:05:45.451058142Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 5 00:05:45.451430 dockerd[1652]: time="2025-09-05T00:05:45.451160193Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 5 00:05:45.451430 dockerd[1652]: time="2025-09-05T00:05:45.451284336Z" level=info msg="Daemon has completed initialization" Sep 5 00:05:45.527451 dockerd[1652]: time="2025-09-05T00:05:45.527307860Z" level=info msg="API listen on /run/docker.sock" Sep 5 00:05:45.527682 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 5 00:05:46.517999 containerd[1460]: time="2025-09-05T00:05:46.517931719Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Sep 5 00:05:48.247999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782901980.mount: Deactivated successfully. Sep 5 00:05:49.178820 containerd[1460]: time="2025-09-05T00:05:49.178753269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:49.179534 containerd[1460]: time="2025-09-05T00:05:49.179482466Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=28800687" Sep 5 00:05:49.180639 containerd[1460]: time="2025-09-05T00:05:49.180605101Z" level=info msg="ImageCreate event name:\"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:49.183447 containerd[1460]: time="2025-09-05T00:05:49.183403528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:49.184675 containerd[1460]: time="2025-09-05T00:05:49.184634486Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"28797487\" in 2.666633086s" Sep 5 00:05:49.184727 containerd[1460]: time="2025-09-05T00:05:49.184688728Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:0d4edaa48e2f940c934e0f7cfd5209fc85e65ab5e842b980f41263d1764661f1\"" Sep 5 00:05:49.185384 containerd[1460]: time="2025-09-05T00:05:49.185357051Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Sep 5 00:05:50.149448 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 5 00:05:50.158088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:50.407979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 00:05:50.415572 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:05:50.501911 kubelet[1868]: E0905 00:05:50.501847 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:05:50.508411 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:05:50.508614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 5 00:05:51.319113 containerd[1460]: time="2025-09-05T00:05:51.319037882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:51.350415 containerd[1460]: time="2025-09-05T00:05:51.350299407Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=24784128" Sep 5 00:05:51.367847 containerd[1460]: time="2025-09-05T00:05:51.367766689Z" level=info msg="ImageCreate event name:\"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:51.394875 containerd[1460]: time="2025-09-05T00:05:51.394769629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:51.396561 containerd[1460]: time="2025-09-05T00:05:51.396485296Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"26387322\" in 2.211089452s" Sep 5 00:05:51.396621 containerd[1460]: time="2025-09-05T00:05:51.396550979Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:b248d0b0c74ad8230e0bae0cbed477560e8a1e8c7ef5f29b7e75c1f273c8a091\"" Sep 5 00:05:51.397207 containerd[1460]: time="2025-09-05T00:05:51.397182183Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Sep 5 00:05:53.509819 containerd[1460]: time="2025-09-05T00:05:53.509760777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:53.510639 containerd[1460]: time="2025-09-05T00:05:53.510585062Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=19175036" Sep 5 00:05:53.511720 containerd[1460]: time="2025-09-05T00:05:53.511683242Z" level=info msg="ImageCreate event name:\"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:53.515274 containerd[1460]: time="2025-09-05T00:05:53.515237205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 
00:05:53.516209 containerd[1460]: time="2025-09-05T00:05:53.516176897Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"20778248\" in 2.118960761s" Sep 5 00:05:53.516209 containerd[1460]: time="2025-09-05T00:05:53.516206964Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:2ac266f06c9a5a3d0d20ae482dbccb54d3be454d5ca49f48b528bdf5bae3e908\"" Sep 5 00:05:53.516651 containerd[1460]: time="2025-09-05T00:05:53.516633924Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Sep 5 00:05:55.043977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051287161.mount: Deactivated successfully. Sep 5 00:05:55.878490 containerd[1460]: time="2025-09-05T00:05:55.878408290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:55.879138 containerd[1460]: time="2025-09-05T00:05:55.879091391Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=30897170" Sep 5 00:05:55.880419 containerd[1460]: time="2025-09-05T00:05:55.880352636Z" level=info msg="ImageCreate event name:\"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:55.882661 containerd[1460]: time="2025-09-05T00:05:55.882629465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:55.883450 containerd[1460]: time="2025-09-05T00:05:55.883399819Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"30896189\" in 2.366737553s" Sep 5 00:05:55.883493 containerd[1460]: time="2025-09-05T00:05:55.883454773Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:d7b94972d43c5d6ce8088a8bcd08614a5ecf2bf04166232c688adcd0b8ed4b12\"" Sep 5 00:05:55.884219 containerd[1460]: time="2025-09-05T00:05:55.884191754Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 5 00:05:56.544461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount466746397.mount: Deactivated successfully. 
Sep 5 00:05:58.061550 containerd[1460]: time="2025-09-05T00:05:58.061477314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:58.140236 containerd[1460]: time="2025-09-05T00:05:58.140109209Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 5 00:05:58.239102 containerd[1460]: time="2025-09-05T00:05:58.239024595Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:58.339485 containerd[1460]: time="2025-09-05T00:05:58.339324117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:58.340810 containerd[1460]: time="2025-09-05T00:05:58.340779165Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.456556432s" Sep 5 00:05:58.340873 containerd[1460]: time="2025-09-05T00:05:58.340816675Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 5 00:05:58.341372 containerd[1460]: time="2025-09-05T00:05:58.341352230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 5 00:05:59.114462 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805235527.mount: Deactivated successfully. 
Sep 5 00:05:59.122197 containerd[1460]: time="2025-09-05T00:05:59.122160966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:59.123091 containerd[1460]: time="2025-09-05T00:05:59.123055393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 5 00:05:59.124222 containerd[1460]: time="2025-09-05T00:05:59.124199698Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:59.126688 containerd[1460]: time="2025-09-05T00:05:59.126591784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:59.127312 containerd[1460]: time="2025-09-05T00:05:59.127275095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 785.891085ms" Sep 5 00:05:59.127379 containerd[1460]: time="2025-09-05T00:05:59.127314629Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 5 00:05:59.128019 containerd[1460]: time="2025-09-05T00:05:59.127984956Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 5 00:05:59.618446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount587399520.mount: Deactivated successfully. Sep 5 00:06:00.759102 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 5 00:06:00.772084 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:06:01.145777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:06:01.150541 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 5 00:06:01.477541 kubelet[1965]: E0905 00:06:01.477333 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 5 00:06:01.482151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 5 00:06:01.482371 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 5 00:06:05.107582 containerd[1460]: time="2025-09-05T00:06:05.107505696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:05.110335 containerd[1460]: time="2025-09-05T00:06:05.110289746Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 5 00:06:05.113223 containerd[1460]: time="2025-09-05T00:06:05.113184283Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:05.119812 containerd[1460]: time="2025-09-05T00:06:05.119734915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:05.121016 containerd[1460]: time="2025-09-05T00:06:05.120985701Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.992968895s" Sep 5 00:06:05.121072 containerd[1460]: time="2025-09-05T00:06:05.121018282Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 5 00:06:07.146532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:06:07.157027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:06:07.182321 systemd[1]: Reloading requested from client PID 2045 ('systemctl') (unit session-7.scope)... Sep 5 00:06:07.182336 systemd[1]: Reloading... Sep 5 00:06:07.281925 zram_generator::config[2090]: No configuration found. Sep 5 00:06:07.674370 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:06:07.761575 systemd[1]: Reloading finished in 578 ms. Sep 5 00:06:07.817041 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 5 00:06:07.817142 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 5 00:06:07.817431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:06:07.820366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:06:07.998446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:06:08.003399 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:06:08.061750 kubelet[2133]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:06:08.061750 kubelet[2133]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Sep 5 00:06:08.061750 kubelet[2133]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:06:08.062141 kubelet[2133]: I0905 00:06:08.061806 2133 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:06:08.617443 kubelet[2133]: I0905 00:06:08.616757 2133 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:06:08.617443 kubelet[2133]: I0905 00:06:08.616788 2133 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:06:08.617443 kubelet[2133]: I0905 00:06:08.617203 2133 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:06:08.637421 kubelet[2133]: E0905 00:06:08.637374 2133 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:08.638226 kubelet[2133]: I0905 00:06:08.638194 2133 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:06:08.643383 kubelet[2133]: E0905 00:06:08.643343 2133 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:06:08.643383 kubelet[2133]: I0905 00:06:08.643374 2133 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:06:08.649213 kubelet[2133]: I0905 00:06:08.649177 2133 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:06:08.649514 kubelet[2133]: I0905 00:06:08.649471 2133 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:06:08.649707 kubelet[2133]: I0905 00:06:08.649503 2133 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:06:08.649809 kubelet[2133]: I0905 00:06:08.649712 2133 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:06:08.649809 kubelet[2133]: I0905 00:06:08.649723 2133 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:06:08.649898 kubelet[2133]: I0905 00:06:08.649880 2133 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:06:08.653028 kubelet[2133]: I0905 00:06:08.652985 2133 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:06:08.653028 kubelet[2133]: I0905 00:06:08.653017 2133 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:06:08.653097 kubelet[2133]: I0905 00:06:08.653036 2133 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:06:08.653097 kubelet[2133]: I0905 00:06:08.653047 2133 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:06:08.656738 kubelet[2133]: W0905 00:06:08.656700 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:08.657463 kubelet[2133]: E0905 00:06:08.656858 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:08.657463 kubelet[2133]: I0905 00:06:08.656935 2133 kuberuntime_manager.go:269] 
"Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:06:08.657463 kubelet[2133]: W0905 00:06:08.657022 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:08.657463 kubelet[2133]: E0905 00:06:08.657079 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:08.657463 kubelet[2133]: I0905 00:06:08.657325 2133 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:06:08.658800 kubelet[2133]: W0905 00:06:08.658768 2133 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:06:08.661269 kubelet[2133]: I0905 00:06:08.661247 2133 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:06:08.661361 kubelet[2133]: I0905 00:06:08.661283 2133 server.go:1287] "Started kubelet" Sep 5 00:06:08.664571 kubelet[2133]: I0905 00:06:08.663351 2133 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:06:08.664571 kubelet[2133]: I0905 00:06:08.664345 2133 server.go:479] "Adding debug handlers to kubelet server" Sep 5 00:06:08.665124 kubelet[2133]: I0905 00:06:08.665072 2133 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:06:08.665473 kubelet[2133]: I0905 00:06:08.665458 2133 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:06:08.666082 kubelet[2133]: I0905 00:06:08.666063 2133 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:06:08.666551 kubelet[2133]: I0905 00:06:08.666518 2133 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:06:08.669417 kubelet[2133]: E0905 00:06:08.668294 2133 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:06:08.669417 kubelet[2133]: I0905 00:06:08.668339 2133 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:06:08.669417 kubelet[2133]: I0905 00:06:08.668413 2133 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:06:08.669417 kubelet[2133]: I0905 00:06:08.668481 2133 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:06:08.669417 kubelet[2133]: E0905 00:06:08.667345 2133 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.21:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.21:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a2cc97c6cff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:06:08.661261567 +0000 UTC m=+0.653358238,LastTimestamp:2025-09-05 
00:06:08.661261567 +0000 UTC m=+0.653358238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:06:08.669417 kubelet[2133]: E0905 00:06:08.668780 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="200ms" Sep 5 00:06:08.669417 kubelet[2133]: W0905 00:06:08.668853 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:08.669691 kubelet[2133]: E0905 00:06:08.669082 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:08.669691 kubelet[2133]: I0905 00:06:08.669431 2133 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:06:08.669691 kubelet[2133]: I0905 00:06:08.669492 2133 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:06:08.670535 kubelet[2133]: I0905 00:06:08.670512 2133 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:06:08.672861 kubelet[2133]: E0905 00:06:08.672840 2133 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:06:08.683040 kubelet[2133]: I0905 00:06:08.682982 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:06:08.684430 kubelet[2133]: I0905 00:06:08.684409 2133 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:06:08.684505 kubelet[2133]: I0905 00:06:08.684435 2133 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 00:06:08.684505 kubelet[2133]: I0905 00:06:08.684455 2133 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:06:08.684505 kubelet[2133]: I0905 00:06:08.684463 2133 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 00:06:08.684709 kubelet[2133]: E0905 00:06:08.684512 2133 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:06:08.685302 kubelet[2133]: I0905 00:06:08.685275 2133 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:06:08.685302 kubelet[2133]: I0905 00:06:08.685293 2133 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:06:08.685368 kubelet[2133]: I0905 00:06:08.685309 2133 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:06:08.686003 kubelet[2133]: W0905 00:06:08.685959 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:08.686599 kubelet[2133]: E0905 00:06:08.686013 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:08.689349 kubelet[2133]: I0905 00:06:08.689332 2133 policy_none.go:49] "None policy: Start" Sep 5 00:06:08.689401 kubelet[2133]: I0905 00:06:08.689355 2133 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:06:08.689401 kubelet[2133]: I0905 00:06:08.689370 2133 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:06:08.694607 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:06:08.714099 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:06:08.717248 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 5 00:06:08.728687 kubelet[2133]: I0905 00:06:08.728657 2133 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:06:08.728985 kubelet[2133]: I0905 00:06:08.728966 2133 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:06:08.729058 kubelet[2133]: I0905 00:06:08.728981 2133 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:06:08.729462 kubelet[2133]: I0905 00:06:08.729204 2133 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:06:08.729932 kubelet[2133]: E0905 00:06:08.729911 2133 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 00:06:08.729982 kubelet[2133]: E0905 00:06:08.729953 2133 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:06:08.791715 systemd[1]: Created slice kubepods-burstable-pod21b9be35a10533b0511bdcbf38fe26bd.slice - libcontainer container kubepods-burstable-pod21b9be35a10533b0511bdcbf38fe26bd.slice. 
Sep 5 00:06:08.805635 kubelet[2133]: E0905 00:06:08.805607 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:08.808297 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. Sep 5 00:06:08.810333 kubelet[2133]: E0905 00:06:08.810316 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:08.812045 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Sep 5 00:06:08.813485 kubelet[2133]: E0905 00:06:08.813455 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:08.830500 kubelet[2133]: I0905 00:06:08.830485 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:06:08.830884 kubelet[2133]: E0905 00:06:08.830850 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 5 00:06:08.869345 kubelet[2133]: E0905 00:06:08.869234 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="400ms" Sep 5 00:06:08.969580 kubelet[2133]: I0905 00:06:08.969543 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21b9be35a10533b0511bdcbf38fe26bd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b9be35a10533b0511bdcbf38fe26bd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:08.969580 kubelet[2133]: I0905 00:06:08.969582 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21b9be35a10533b0511bdcbf38fe26bd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b9be35a10533b0511bdcbf38fe26bd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:08.969691 kubelet[2133]: I0905 00:06:08.969608 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:08.969691 kubelet[2133]: I0905 00:06:08.969630 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:08.969691 kubelet[2133]: I0905 00:06:08.969652 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:08.969691 kubelet[2133]: I0905 00:06:08.969684 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21b9be35a10533b0511bdcbf38fe26bd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"21b9be35a10533b0511bdcbf38fe26bd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:08.969780 kubelet[2133]: I0905 00:06:08.969722 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:08.969780 kubelet[2133]: I0905 00:06:08.969749 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:08.969846 kubelet[2133]: I0905 00:06:08.969790 2133 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:09.032444 kubelet[2133]: I0905 00:06:09.032417 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:06:09.032713 kubelet[2133]: E0905 00:06:09.032659 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 5 00:06:09.106263 kubelet[2133]: E0905 00:06:09.106232 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:09.107067 containerd[1460]: time="2025-09-05T00:06:09.107016162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:21b9be35a10533b0511bdcbf38fe26bd,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:09.111228 kubelet[2133]: E0905 00:06:09.111202 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:09.111699 containerd[1460]: time="2025-09-05T00:06:09.111646909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:09.113883 kubelet[2133]: E0905 00:06:09.113858 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:09.114250 containerd[1460]: time="2025-09-05T00:06:09.114219011Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:09.269858 kubelet[2133]: E0905 00:06:09.269722 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="800ms" Sep 5 00:06:09.434664 kubelet[2133]: I0905 00:06:09.434628 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:06:09.435067 kubelet[2133]: E0905 00:06:09.435039 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 5 00:06:09.564319 kubelet[2133]: W0905 00:06:09.564164 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:09.564319 kubelet[2133]: E0905 00:06:09.564235 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:09.683766 kubelet[2133]: W0905 00:06:09.683680 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:09.683766 kubelet[2133]: E0905 00:06:09.683755 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:09.863136 kubelet[2133]: W0905 00:06:09.862991 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:09.863136 kubelet[2133]: E0905 00:06:09.863072 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:09.877023 kubelet[2133]: W0905 00:06:09.876961 2133 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.21:6443: connect: connection refused Sep 5 00:06:09.877070 kubelet[2133]: E0905 00:06:09.877032 2133 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.21:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": 
dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:10.070681 kubelet[2133]: E0905 00:06:10.070603 2133 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.21:6443: connect: connection refused" interval="1.6s" Sep 5 00:06:10.237156 kubelet[2133]: I0905 00:06:10.237037 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:06:10.237527 kubelet[2133]: E0905 00:06:10.237476 2133 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.21:6443/api/v1/nodes\": dial tcp 10.0.0.21:6443: connect: connection refused" node="localhost" Sep 5 00:06:10.261558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2927286556.mount: Deactivated successfully. Sep 5 00:06:10.270106 containerd[1460]: time="2025-09-05T00:06:10.270035811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:06:10.271130 containerd[1460]: time="2025-09-05T00:06:10.271097628Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:06:10.272093 containerd[1460]: time="2025-09-05T00:06:10.272026479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:06:10.272943 containerd[1460]: time="2025-09-05T00:06:10.272921617Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:06:10.273761 containerd[1460]: time="2025-09-05T00:06:10.273727152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:06:10.275439 containerd[1460]: time="2025-09-05T00:06:10.275410651Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:06:10.276592 containerd[1460]: time="2025-09-05T00:06:10.276533686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:06:10.279368 containerd[1460]: time="2025-09-05T00:06:10.279330601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:06:10.281069 containerd[1460]: time="2025-09-05T00:06:10.281038238Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.166767386s" Sep 5 00:06:10.281955 containerd[1460]: time="2025-09-05T00:06:10.281920450Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.170198426s" Sep 5 00:06:10.282853 containerd[1460]: time="2025-09-05T00:06:10.282798524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.17569282s" Sep 5 00:06:10.423968 containerd[1460]: time="2025-09-05T00:06:10.423812643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:10.423968 containerd[1460]: time="2025-09-05T00:06:10.423903486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:10.423968 containerd[1460]: time="2025-09-05T00:06:10.423919317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:10.424733 containerd[1460]: time="2025-09-05T00:06:10.424500612Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:10.424733 containerd[1460]: time="2025-09-05T00:06:10.424537453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:10.424733 containerd[1460]: time="2025-09-05T00:06:10.424550919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:10.424733 containerd[1460]: time="2025-09-05T00:06:10.424609411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:10.424733 containerd[1460]: time="2025-09-05T00:06:10.424027725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:10.425270 containerd[1460]: time="2025-09-05T00:06:10.424792352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:10.425270 containerd[1460]: time="2025-09-05T00:06:10.424879609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:10.425270 containerd[1460]: time="2025-09-05T00:06:10.424894729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:10.425960 containerd[1460]: time="2025-09-05T00:06:10.424983037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:10.454994 systemd[1]: Started cri-containerd-9b28f1bf23c0be1065ae3527c2634a374abb536adf10c78707dfa6daea51d7f7.scope - libcontainer container 9b28f1bf23c0be1065ae3527c2634a374abb536adf10c78707dfa6daea51d7f7. Sep 5 00:06:10.459542 systemd[1]: Started cri-containerd-52cef6e5b9ed258ef20200940955130dc7cab529ec291cfd7293494ca94f9ee4.scope - libcontainer container 52cef6e5b9ed258ef20200940955130dc7cab529ec291cfd7293494ca94f9ee4. 
Sep 5 00:06:10.461464 systemd[1]: Started cri-containerd-e3d712bbf17d488ee73ed565a7dd97559ad999ad9526c855c0a499dfab28d60f.scope - libcontainer container e3d712bbf17d488ee73ed565a7dd97559ad999ad9526c855c0a499dfab28d60f. Sep 5 00:06:10.498680 containerd[1460]: time="2025-09-05T00:06:10.498527845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b28f1bf23c0be1065ae3527c2634a374abb536adf10c78707dfa6daea51d7f7\"" Sep 5 00:06:10.501618 kubelet[2133]: E0905 00:06:10.501479 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:10.503051 containerd[1460]: time="2025-09-05T00:06:10.503002879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"52cef6e5b9ed258ef20200940955130dc7cab529ec291cfd7293494ca94f9ee4\"" Sep 5 00:06:10.504285 kubelet[2133]: E0905 00:06:10.504256 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:10.505686 containerd[1460]: time="2025-09-05T00:06:10.505658393Z" level=info msg="CreateContainer within sandbox \"9b28f1bf23c0be1065ae3527c2634a374abb536adf10c78707dfa6daea51d7f7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:06:10.506361 containerd[1460]: time="2025-09-05T00:06:10.506328229Z" level=info msg="CreateContainer within sandbox \"52cef6e5b9ed258ef20200940955130dc7cab529ec291cfd7293494ca94f9ee4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:06:10.511397 containerd[1460]: time="2025-09-05T00:06:10.511350914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:21b9be35a10533b0511bdcbf38fe26bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3d712bbf17d488ee73ed565a7dd97559ad999ad9526c855c0a499dfab28d60f\"" Sep 5 00:06:10.512413 kubelet[2133]: E0905 00:06:10.512269 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:10.513943 containerd[1460]: time="2025-09-05T00:06:10.513915443Z" level=info msg="CreateContainer within sandbox \"e3d712bbf17d488ee73ed565a7dd97559ad999ad9526c855c0a499dfab28d60f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:06:10.534516 containerd[1460]: time="2025-09-05T00:06:10.534466328Z" level=info msg="CreateContainer within sandbox \"9b28f1bf23c0be1065ae3527c2634a374abb536adf10c78707dfa6daea51d7f7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"36b2b9c135b195d4bc7c0843230af91f35c4ed19cc11e5bd52119dbafa966588\"" Sep 5 00:06:10.535089 containerd[1460]: time="2025-09-05T00:06:10.535057742Z" level=info msg="StartContainer for \"36b2b9c135b195d4bc7c0843230af91f35c4ed19cc11e5bd52119dbafa966588\"" Sep 5 00:06:10.536181 containerd[1460]: time="2025-09-05T00:06:10.536150409Z" level=info msg="CreateContainer within sandbox \"52cef6e5b9ed258ef20200940955130dc7cab529ec291cfd7293494ca94f9ee4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"fbb81dc898c8fa663e40825dbce560882ba456a5d8d36d45d68f75df2a445a7e\"" Sep 5 00:06:10.536458 containerd[1460]: time="2025-09-05T00:06:10.536431308Z" level=info msg="StartContainer for \"fbb81dc898c8fa663e40825dbce560882ba456a5d8d36d45d68f75df2a445a7e\"" Sep 5 00:06:10.539073 containerd[1460]: time="2025-09-05T00:06:10.539029893Z" level=info msg="CreateContainer within sandbox \"e3d712bbf17d488ee73ed565a7dd97559ad999ad9526c855c0a499dfab28d60f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"decfe574baeb0a7daa80043158b537d6cb59457fd934ded7003772e5578be1dd\"" Sep 5 00:06:10.539537 containerd[1460]: time="2025-09-05T00:06:10.539502780Z" level=info msg="StartContainer for \"decfe574baeb0a7daa80043158b537d6cb59457fd934ded7003772e5578be1dd\"" Sep 5 00:06:10.581010 systemd[1]: Started cri-containerd-36b2b9c135b195d4bc7c0843230af91f35c4ed19cc11e5bd52119dbafa966588.scope - libcontainer container 36b2b9c135b195d4bc7c0843230af91f35c4ed19cc11e5bd52119dbafa966588. Sep 5 00:06:10.582843 systemd[1]: Started cri-containerd-decfe574baeb0a7daa80043158b537d6cb59457fd934ded7003772e5578be1dd.scope - libcontainer container decfe574baeb0a7daa80043158b537d6cb59457fd934ded7003772e5578be1dd. Sep 5 00:06:10.585054 systemd[1]: Started cri-containerd-fbb81dc898c8fa663e40825dbce560882ba456a5d8d36d45d68f75df2a445a7e.scope - libcontainer container fbb81dc898c8fa663e40825dbce560882ba456a5d8d36d45d68f75df2a445a7e. Sep 5 00:06:10.637010 containerd[1460]: time="2025-09-05T00:06:10.636954458Z" level=info msg="StartContainer for \"decfe574baeb0a7daa80043158b537d6cb59457fd934ded7003772e5578be1dd\" returns successfully" Sep 5 00:06:10.637129 containerd[1460]: time="2025-09-05T00:06:10.637024252Z" level=info msg="StartContainer for \"36b2b9c135b195d4bc7c0843230af91f35c4ed19cc11e5bd52119dbafa966588\" returns successfully" Sep 5 00:06:10.642156 containerd[1460]: time="2025-09-05T00:06:10.642107123Z" level=info msg="StartContainer for \"fbb81dc898c8fa663e40825dbce560882ba456a5d8d36d45d68f75df2a445a7e\" returns successfully" Sep 5 00:06:10.680057 kubelet[2133]: E0905 00:06:10.680004 2133 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.21:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:06:10.692736 kubelet[2133]: E0905 00:06:10.692684 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:10.693143 kubelet[2133]: E0905 00:06:10.693116 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:10.694398 kubelet[2133]: E0905 00:06:10.694374 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:10.694474 kubelet[2133]: E0905 00:06:10.694458 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:10.696377 kubelet[2133]: E0905 00:06:10.696355 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Sep 5 00:06:10.696463 kubelet[2133]: E0905 00:06:10.696447 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:11.699657 kubelet[2133]: E0905 00:06:11.699382 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:11.699657 kubelet[2133]: E0905 00:06:11.699524 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:11.701225 kubelet[2133]: E0905 00:06:11.700998 2133 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 00:06:11.701225 kubelet[2133]: E0905 00:06:11.701124 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:11.812470 kubelet[2133]: E0905 00:06:11.812399 2133 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:06:11.838841 kubelet[2133]: I0905 00:06:11.838799 2133 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:06:11.857779 kubelet[2133]: E0905 00:06:11.857681 2133 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18623a2cc97c6cff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:06:08.661261567 +0000 UTC m=+0.653358238,LastTimestamp:2025-09-05 00:06:08.661261567 +0000 UTC m=+0.653358238,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:06:12.496857 kubelet[2133]: E0905 00:06:12.496687 2133 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18623a2cca2cb900 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:06:08.67281536 +0000 UTC m=+0.664912031,LastTimestamp:2025-09-05 00:06:08.67281536 +0000 UTC m=+0.664912031,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:06:12.501010 kubelet[2133]: I0905 00:06:12.500969 2133 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:06:12.501010 kubelet[2133]: E0905 00:06:12.500998 2133 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 00:06:12.564202 kubelet[2133]: E0905 00:06:12.563968 2133 event.go:359] "Server rejected event 
(will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18623a2ccad9f27e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:06:08.684167806 +0000 UTC m=+0.676264477,LastTimestamp:2025-09-05 00:06:08.684167806 +0000 UTC m=+0.676264477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:06:12.655399 kubelet[2133]: I0905 00:06:12.655361 2133 apiserver.go:52] "Watching apiserver" Sep 5 00:06:12.668998 kubelet[2133]: I0905 00:06:12.668957 2133 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:06:12.668998 kubelet[2133]: I0905 00:06:12.668973 2133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:12.699858 kubelet[2133]: I0905 00:06:12.699809 2133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:12.863970 kubelet[2133]: E0905 00:06:12.863820 2133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:12.864078 kubelet[2133]: E0905 00:06:12.864020 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:12.864455 kubelet[2133]: E0905 00:06:12.864432 2133 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:12.864499 kubelet[2133]: I0905 00:06:12.864458 2133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:12.870527 kubelet[2133]: I0905 00:06:12.870486 2133 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:12.872983 kubelet[2133]: E0905 00:06:12.870909 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:12.875249 kubelet[2133]: E0905 00:06:12.875212 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:13.943810 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)... Sep 5 00:06:13.943838 systemd[1]: Reloading... Sep 5 00:06:14.024874 zram_generator::config[2454]: No configuration found. Sep 5 00:06:14.126522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Sep 5 00:06:14.174141 kubelet[2133]: E0905 00:06:14.174104 2133 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:14.220439 systemd[1]: Reloading finished in 276 ms. Sep 5 00:06:14.264321 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:06:14.288493 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:06:14.288864 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:06:14.288928 systemd[1]: kubelet.service: Consumed 1.118s CPU time, 130.3M memory peak, 0B memory swap peak. Sep 5 00:06:14.299226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:06:14.466667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:06:14.472221 (kubelet)[2496]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:06:14.520348 kubelet[2496]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:06:14.520348 kubelet[2496]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 00:06:14.520348 kubelet[2496]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:06:14.520811 kubelet[2496]: I0905 00:06:14.520476 2496 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:06:14.528472 kubelet[2496]: I0905 00:06:14.528421 2496 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 5 00:06:14.528472 kubelet[2496]: I0905 00:06:14.528445 2496 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:06:14.528736 kubelet[2496]: I0905 00:06:14.528710 2496 server.go:954] "Client rotation is on, will bootstrap in background" Sep 5 00:06:14.529924 kubelet[2496]: I0905 00:06:14.529902 2496 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:06:14.533115 kubelet[2496]: I0905 00:06:14.533075 2496 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:06:14.537748 kubelet[2496]: E0905 00:06:14.537671 2496 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:06:14.537923 kubelet[2496]: I0905 00:06:14.537880 2496 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:06:14.544262 kubelet[2496]: I0905 00:06:14.544231 2496 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 5 00:06:14.544839 kubelet[2496]: I0905 00:06:14.544791 2496 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:06:14.545040 kubelet[2496]: I0905 00:06:14.544823 2496 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:06:14.545040 kubelet[2496]: I0905 00:06:14.545038 2496 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:06:14.545182 kubelet[2496]: I0905 00:06:14.545047 2496 container_manager_linux.go:304] "Creating device plugin manager" Sep 5 00:06:14.545182 kubelet[2496]: I0905 00:06:14.545094 2496 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:06:14.545239 kubelet[2496]: I0905 00:06:14.545227 2496 kubelet.go:446] "Attempting to sync node with API server" Sep 5 00:06:14.545269 kubelet[2496]: I0905 00:06:14.545247 2496 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:06:14.545269 kubelet[2496]: I0905 00:06:14.545262 2496 kubelet.go:352] "Adding apiserver pod source" Sep 5 00:06:14.545328 kubelet[2496]: I0905 00:06:14.545272 2496 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:06:14.548188 kubelet[2496]: I0905 00:06:14.548101 2496 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:06:14.548461 kubelet[2496]: I0905 00:06:14.548448 2496 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:06:14.549456 kubelet[2496]: I0905 00:06:14.549428 2496 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 00:06:14.549456 kubelet[2496]: I0905 00:06:14.549457 2496 server.go:1287] "Started kubelet" Sep 5 00:06:14.553115 kubelet[2496]: I0905 00:06:14.552326 2496 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:06:14.553115 kubelet[2496]: I0905 00:06:14.552589 2496 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:06:14.553115 kubelet[2496]: I0905 00:06:14.552629 2496 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:06:14.553620 kubelet[2496]: I0905 00:06:14.553407 2496 server.go:479] "Adding debug handlers to kubelet server" Sep 5 00:06:14.554937 kubelet[2496]: I0905 00:06:14.554914 2496 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:06:14.555662 kubelet[2496]: I0905 00:06:14.555638 2496 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:06:14.559078 kubelet[2496]: I0905 00:06:14.559050 2496 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 00:06:14.559990 kubelet[2496]: I0905 00:06:14.559623 2496 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 00:06:14.559990 kubelet[2496]: I0905 00:06:14.559822 2496 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:06:14.561093 kubelet[2496]: E0905 00:06:14.561069 2496 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:06:14.562635 kubelet[2496]: I0905 00:06:14.562606 2496 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:06:14.565182 kubelet[2496]: I0905 00:06:14.565160 2496 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:06:14.565279 kubelet[2496]: I0905 00:06:14.565270 2496 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:06:14.574506 kubelet[2496]: I0905 00:06:14.574477 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:06:14.576886 kubelet[2496]: I0905 00:06:14.576863 2496 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 5 00:06:14.576886 kubelet[2496]: I0905 00:06:14.576885 2496 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 5 00:06:14.576957 kubelet[2496]: I0905 00:06:14.576905 2496 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 00:06:14.576957 kubelet[2496]: I0905 00:06:14.576911 2496 kubelet.go:2382] "Starting kubelet main sync loop" Sep 5 00:06:14.577001 kubelet[2496]: E0905 00:06:14.576957 2496 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:06:14.599146 kubelet[2496]: I0905 00:06:14.599120 2496 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 00:06:14.599146 kubelet[2496]: I0905 00:06:14.599133 2496 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 00:06:14.599146 kubelet[2496]: I0905 00:06:14.599149 2496 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:06:14.599305 kubelet[2496]: I0905 00:06:14.599277 2496 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:06:14.599305 kubelet[2496]: I0905 00:06:14.599286 2496 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:06:14.599305 kubelet[2496]: I0905 00:06:14.599303 2496 policy_none.go:49] "None policy: Start" Sep 5 00:06:14.599371 kubelet[2496]: I0905 00:06:14.599312 2496 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 00:06:14.599371 kubelet[2496]: I0905 00:06:14.599321 2496 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:06:14.599421 kubelet[2496]: I0905 00:06:14.599405 2496 state_mem.go:75] "Updated machine memory state" Sep 5 00:06:14.602875 kubelet[2496]: I0905 00:06:14.602861 2496 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:06:14.603430 kubelet[2496]: I0905 00:06:14.603408 2496 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:06:14.603479 kubelet[2496]: I0905 00:06:14.603432 2496 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:06:14.603741 kubelet[2496]: I0905 00:06:14.603720 2496 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:06:14.605388 kubelet[2496]: E0905 00:06:14.604409 2496 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 5 00:06:14.677737 kubelet[2496]: I0905 00:06:14.677687 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:14.677737 kubelet[2496]: I0905 00:06:14.677706 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.678003 kubelet[2496]: I0905 00:06:14.677692 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:14.682286 kubelet[2496]: E0905 00:06:14.682262 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:14.682987 kubelet[2496]: E0905 00:06:14.682959 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.714186 kubelet[2496]: I0905 00:06:14.714169 2496 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 00:06:14.719802 kubelet[2496]: I0905 00:06:14.719778 2496 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 00:06:14.719897 kubelet[2496]: I0905 00:06:14.719864 2496 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 00:06:14.861645 kubelet[2496]: I0905 00:06:14.861514 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/21b9be35a10533b0511bdcbf38fe26bd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"21b9be35a10533b0511bdcbf38fe26bd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:14.861645 kubelet[2496]: I0905 00:06:14.861546 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.861645 kubelet[2496]: I0905 00:06:14.861564 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.861645 kubelet[2496]: I0905 00:06:14.861582 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.861645 kubelet[2496]: I0905 00:06:14.861597 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.861925 kubelet[2496]: I0905 00:06:14.861634 2496 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:14.861925 kubelet[2496]: I0905 00:06:14.861683 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/21b9be35a10533b0511bdcbf38fe26bd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b9be35a10533b0511bdcbf38fe26bd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:14.861925 kubelet[2496]: I0905 00:06:14.861704 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/21b9be35a10533b0511bdcbf38fe26bd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"21b9be35a10533b0511bdcbf38fe26bd\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:06:14.861925 kubelet[2496]: I0905 00:06:14.861723 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:06:14.983061 kubelet[2496]: E0905 00:06:14.983011 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:14.983061 kubelet[2496]: E0905 00:06:14.983011 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:14.983190 kubelet[2496]: E0905 00:06:14.983152 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:15.546686 kubelet[2496]: I0905 00:06:15.546656 2496 apiserver.go:52] "Watching apiserver" Sep 5 00:06:15.560181 kubelet[2496]: I0905 00:06:15.560149 2496 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 00:06:15.590064 kubelet[2496]: I0905 00:06:15.589948 2496 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:15.590320 kubelet[2496]: E0905 00:06:15.589955 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:15.590320 kubelet[2496]: E0905 00:06:15.590138 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:15.596401 kubelet[2496]: E0905 00:06:15.595820 2496 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 00:06:15.596401 kubelet[2496]: E0905 00:06:15.595967 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:15.620005 kubelet[2496]: I0905 00:06:15.619933 
2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.6199087309999998 podStartE2EDuration="3.619908731s" podCreationTimestamp="2025-09-05 00:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:15.610472031 +0000 UTC m=+1.133969605" watchObservedRunningTime="2025-09-05 00:06:15.619908731 +0000 UTC m=+1.143406305" Sep 5 00:06:15.626115 kubelet[2496]: I0905 00:06:15.626068 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.626055539 podStartE2EDuration="1.626055539s" podCreationTimestamp="2025-09-05 00:06:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:15.625901034 +0000 UTC m=+1.149398608" watchObservedRunningTime="2025-09-05 00:06:15.626055539 +0000 UTC m=+1.149553113" Sep 5 00:06:15.626259 kubelet[2496]: I0905 00:06:15.626143 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.626139509 podStartE2EDuration="3.626139509s" podCreationTimestamp="2025-09-05 00:06:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:15.620013511 +0000 UTC m=+1.143511085" watchObservedRunningTime="2025-09-05 00:06:15.626139509 +0000 UTC m=+1.149637084" Sep 5 00:06:16.590933 kubelet[2496]: E0905 00:06:16.590882 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:16.591325 kubelet[2496]: E0905 00:06:16.591020 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:17.592497 kubelet[2496]: E0905 00:06:17.592461 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:17.592497 kubelet[2496]: E0905 00:06:17.592462 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:19.128199 kubelet[2496]: I0905 00:06:19.128109 2496 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:06:19.128654 containerd[1460]: time="2025-09-05T00:06:19.128596000Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:06:19.129028 kubelet[2496]: I0905 00:06:19.128794 2496 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:06:19.615468 systemd[1]: Created slice kubepods-besteffort-pod49283bc4_d870_4898_8dd3_a745cbcf9348.slice - libcontainer container kubepods-besteffort-pod49283bc4_d870_4898_8dd3_a745cbcf9348.slice. 
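
[Editor's note] The dns.go:153 "Nameserver limits exceeded" errors recurring throughout this log come from a hard cap: the glibc resolver (and hence the resolv.conf the kubelet builds for pods) honors at most three nameservers. The node's /etc/resolv.conf evidently lists more than 1.1.1.1, 1.0.0.1, and 8.8.8.8, so the kubelet keeps only those three and warns about the rest. A small sketch that performs the same check on a host:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the kubelet applies the same cap per pod

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}
	if len(servers) > maxNameservers {
		fmt.Printf("warning: %d nameservers, only %v will be applied\n",
			len(servers), servers[:maxNameservers])
	} else {
		fmt.Println("nameserver count OK:", servers)
	}
}
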
Sep 5 00:06:19.690019 kubelet[2496]: I0905 00:06:19.689971 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49283bc4-d870-4898-8dd3-a745cbcf9348-lib-modules\") pod \"kube-proxy-82kpn\" (UID: \"49283bc4-d870-4898-8dd3-a745cbcf9348\") " pod="kube-system/kube-proxy-82kpn" Sep 5 00:06:19.690019 kubelet[2496]: I0905 00:06:19.690017 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwsp\" (UniqueName: \"kubernetes.io/projected/49283bc4-d870-4898-8dd3-a745cbcf9348-kube-api-access-bnwsp\") pod \"kube-proxy-82kpn\" (UID: \"49283bc4-d870-4898-8dd3-a745cbcf9348\") " pod="kube-system/kube-proxy-82kpn" Sep 5 00:06:19.690019 kubelet[2496]: I0905 00:06:19.690039 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/49283bc4-d870-4898-8dd3-a745cbcf9348-kube-proxy\") pod \"kube-proxy-82kpn\" (UID: \"49283bc4-d870-4898-8dd3-a745cbcf9348\") " pod="kube-system/kube-proxy-82kpn" Sep 5 00:06:19.690284 kubelet[2496]: I0905 00:06:19.690057 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49283bc4-d870-4898-8dd3-a745cbcf9348-xtables-lock\") pod \"kube-proxy-82kpn\" (UID: \"49283bc4-d870-4898-8dd3-a745cbcf9348\") " pod="kube-system/kube-proxy-82kpn" Sep 5 00:06:19.928903 kubelet[2496]: E0905 00:06:19.928722 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:19.930090 containerd[1460]: time="2025-09-05T00:06:19.929644294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82kpn,Uid:49283bc4-d870-4898-8dd3-a745cbcf9348,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:19.978426 containerd[1460]: time="2025-09-05T00:06:19.978305835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:19.978426 containerd[1460]: time="2025-09-05T00:06:19.978371009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:19.978426 containerd[1460]: time="2025-09-05T00:06:19.978382170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:19.978632 containerd[1460]: time="2025-09-05T00:06:19.978471219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:20.011999 systemd[1]: Started cri-containerd-87eea519cdc9d600fabf6c1ed3dbfd698ece761cb43bc51a8ce9d371ff6b55fe.scope - libcontainer container 87eea519cdc9d600fabf6c1ed3dbfd698ece761cb43bc51a8ce9d371ff6b55fe. 
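
[Editor's note] The four volumes attached to kube-proxy-82kpn above are the standard kubeadm kube-proxy mounts: the kube-proxy ConfigMap, the iptables lock file, the host's kernel modules, and a projected service-account token. A sketch of the equivalent volume spec using the Kubernetes API types; the host paths are kubeadm defaults and are an assumption, since the log shows only the volume names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	fileOrCreate := corev1.HostPathFileOrCreate
	volumes := []corev1.Volume{
		{
			// "kube-proxy" above: the ConfigMap holding config.conf and kubeconfig.conf
			Name: "kube-proxy",
			VolumeSource: corev1.VolumeSource{
				ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: "kube-proxy"},
				},
			},
		},
		{
			// "xtables-lock": serializes iptables writers on the host
			Name: "xtables-lock",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/run/xtables.lock", Type: &fileOrCreate},
			},
		},
		{
			// "lib-modules": read-only kernel modules for iptables/ipvs
			Name: "lib-modules",
			VolumeSource: corev1.VolumeSource{
				HostPath: &corev1.HostPathVolumeSource{Path: "/lib/modules"},
			},
		},
	}
	fmt.Println(len(volumes), "volumes defined")
}
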
Sep 5 00:06:20.035628 containerd[1460]: time="2025-09-05T00:06:20.035574681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82kpn,Uid:49283bc4-d870-4898-8dd3-a745cbcf9348,Namespace:kube-system,Attempt:0,} returns sandbox id \"87eea519cdc9d600fabf6c1ed3dbfd698ece761cb43bc51a8ce9d371ff6b55fe\"" Sep 5 00:06:20.036417 kubelet[2496]: E0905 00:06:20.036379 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:20.038515 containerd[1460]: time="2025-09-05T00:06:20.038474865Z" level=info msg="CreateContainer within sandbox \"87eea519cdc9d600fabf6c1ed3dbfd698ece761cb43bc51a8ce9d371ff6b55fe\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:06:20.055854 containerd[1460]: time="2025-09-05T00:06:20.055773684Z" level=info msg="CreateContainer within sandbox \"87eea519cdc9d600fabf6c1ed3dbfd698ece761cb43bc51a8ce9d371ff6b55fe\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"003754fec2bb8125879b8479fbd8df76c3dd3c8b86eaf50cd1c5e5b591c54200\"" Sep 5 00:06:20.056481 containerd[1460]: time="2025-09-05T00:06:20.056451190Z" level=info msg="StartContainer for \"003754fec2bb8125879b8479fbd8df76c3dd3c8b86eaf50cd1c5e5b591c54200\"" Sep 5 00:06:20.085022 systemd[1]: Started cri-containerd-003754fec2bb8125879b8479fbd8df76c3dd3c8b86eaf50cd1c5e5b591c54200.scope - libcontainer container 003754fec2bb8125879b8479fbd8df76c3dd3c8b86eaf50cd1c5e5b591c54200. Sep 5 00:06:20.122789 containerd[1460]: time="2025-09-05T00:06:20.122716044Z" level=info msg="StartContainer for \"003754fec2bb8125879b8479fbd8df76c3dd3c8b86eaf50cd1c5e5b591c54200\" returns successfully" Sep 5 00:06:20.135927 systemd[1]: Created slice kubepods-besteffort-pode1bae4cb_c4db_4a1c_8d14_d1a4a2a18093.slice - libcontainer container kubepods-besteffort-pode1bae4cb_c4db_4a1c_8d14_d1a4a2a18093.slice. Sep 5 00:06:20.195341 kubelet[2496]: I0905 00:06:20.195186 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thsmv\" (UniqueName: \"kubernetes.io/projected/e1bae4cb-c4db-4a1c-8d14-d1a4a2a18093-kube-api-access-thsmv\") pod \"tigera-operator-755d956888-j56cf\" (UID: \"e1bae4cb-c4db-4a1c-8d14-d1a4a2a18093\") " pod="tigera-operator/tigera-operator-755d956888-j56cf" Sep 5 00:06:20.195341 kubelet[2496]: I0905 00:06:20.195228 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1bae4cb-c4db-4a1c-8d14-d1a4a2a18093-var-lib-calico\") pod \"tigera-operator-755d956888-j56cf\" (UID: \"e1bae4cb-c4db-4a1c-8d14-d1a4a2a18093\") " pod="tigera-operator/tigera-operator-755d956888-j56cf" Sep 5 00:06:20.439851 containerd[1460]: time="2025-09-05T00:06:20.439789299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j56cf,Uid:e1bae4cb-c4db-4a1c-8d14-d1a4a2a18093,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:06:20.473383 containerd[1460]: time="2025-09-05T00:06:20.472317590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:20.473383 containerd[1460]: time="2025-09-05T00:06:20.472365782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:20.473383 containerd[1460]: time="2025-09-05T00:06:20.472388845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:20.473383 containerd[1460]: time="2025-09-05T00:06:20.472500167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:20.493233 systemd[1]: Started cri-containerd-83107149addc678c502e4e3f315462fe91848738ea69a8287514f72dd74034a1.scope - libcontainer container 83107149addc678c502e4e3f315462fe91848738ea69a8287514f72dd74034a1. Sep 5 00:06:20.535910 containerd[1460]: time="2025-09-05T00:06:20.535862022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-j56cf,Uid:e1bae4cb-c4db-4a1c-8d14-d1a4a2a18093,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"83107149addc678c502e4e3f315462fe91848738ea69a8287514f72dd74034a1\"" Sep 5 00:06:20.538897 containerd[1460]: time="2025-09-05T00:06:20.537942930Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:06:20.599098 kubelet[2496]: E0905 00:06:20.599062 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:21.789739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount577115681.mount: Deactivated successfully. Sep 5 00:06:22.418150 update_engine[1443]: I20250905 00:06:22.418053 1443 update_attempter.cc:509] Updating boot flags... Sep 5 00:06:22.472161 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2799) Sep 5 00:06:22.508859 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2798) Sep 5 00:06:22.546874 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2798) Sep 5 00:06:23.042768 kubelet[2496]: E0905 00:06:23.042680 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:23.055882 kubelet[2496]: I0905 00:06:23.055821 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82kpn" podStartSLOduration=4.05580528 podStartE2EDuration="4.05580528s" podCreationTimestamp="2025-09-05 00:06:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:20.609199979 +0000 UTC m=+6.132697553" watchObservedRunningTime="2025-09-05 00:06:23.05580528 +0000 UTC m=+8.579302854" Sep 5 00:06:23.603174 kubelet[2496]: E0905 00:06:23.603136 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:24.904614 containerd[1460]: time="2025-09-05T00:06:24.904551565Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:24.905518 containerd[1460]: time="2025-09-05T00:06:24.905445607Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:06:24.906723 containerd[1460]: time="2025-09-05T00:06:24.906692547Z" level=info msg="ImageCreate 
event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:24.910248 containerd[1460]: time="2025-09-05T00:06:24.910202599Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:24.910869 containerd[1460]: time="2025-09-05T00:06:24.910788429Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.372800182s" Sep 5 00:06:24.910869 containerd[1460]: time="2025-09-05T00:06:24.910844445Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:06:24.913018 containerd[1460]: time="2025-09-05T00:06:24.912980236Z" level=info msg="CreateContainer within sandbox \"83107149addc678c502e4e3f315462fe91848738ea69a8287514f72dd74034a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:06:24.927176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3188208679.mount: Deactivated successfully. Sep 5 00:06:24.927919 containerd[1460]: time="2025-09-05T00:06:24.927864637Z" level=info msg="CreateContainer within sandbox \"83107149addc678c502e4e3f315462fe91848738ea69a8287514f72dd74034a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e9760f93b67832855508a9c6af14be518bafcb5c7e38a20c9b39527283b9c9b\"" Sep 5 00:06:24.928566 containerd[1460]: time="2025-09-05T00:06:24.928379441Z" level=info msg="StartContainer for \"5e9760f93b67832855508a9c6af14be518bafcb5c7e38a20c9b39527283b9c9b\"" Sep 5 00:06:24.963970 systemd[1]: Started cri-containerd-5e9760f93b67832855508a9c6af14be518bafcb5c7e38a20c9b39527283b9c9b.scope - libcontainer container 5e9760f93b67832855508a9c6af14be518bafcb5c7e38a20c9b39527283b9c9b. 
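The "in 4.372800182s" figure in the Pulled entry can be cross-checked against the two containerd log timestamps (PullImage start at 00:06:20.537942930, Pulled at 00:06:24.910788429); containerd times the pull internally, so the log-derived figure differs by a few tens of microseconds. A sketch of the cross-check:

// pull_duration.go - sketch: recover the tigera-operator image pull time
// from the two containerd entries above (PullImage start, Pulled result).
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries in this log.
	start, err := time.Parse(time.RFC3339Nano, "2025-09-05T00:06:20.537942930Z")
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2025-09-05T00:06:24.910788429Z")
	if err != nil {
		panic(err)
	}
	fmt.Println(end.Sub(start)) // ~4.3728s, matching the reported "in 4.372800182s"
}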
Sep 5 00:06:24.995091 containerd[1460]: time="2025-09-05T00:06:24.995044950Z" level=info msg="StartContainer for \"5e9760f93b67832855508a9c6af14be518bafcb5c7e38a20c9b39527283b9c9b\" returns successfully" Sep 5 00:06:25.616075 kubelet[2496]: I0905 00:06:25.616019 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-j56cf" podStartSLOduration=1.241617277 podStartE2EDuration="5.616003695s" podCreationTimestamp="2025-09-05 00:06:20 +0000 UTC" firstStartedPulling="2025-09-05 00:06:20.537316181 +0000 UTC m=+6.060813755" lastFinishedPulling="2025-09-05 00:06:24.911702599 +0000 UTC m=+10.435200173" observedRunningTime="2025-09-05 00:06:25.615521463 +0000 UTC m=+11.139019037" watchObservedRunningTime="2025-09-05 00:06:25.616003695 +0000 UTC m=+11.139501269" Sep 5 00:06:26.302751 kubelet[2496]: E0905 00:06:26.302702 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:26.610061 kubelet[2496]: E0905 00:06:26.609532 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:27.457177 kubelet[2496]: E0905 00:06:27.457144 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:27.611324 kubelet[2496]: E0905 00:06:27.611261 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:31.721620 sudo[1634]: pam_unix(sudo:session): session closed for user root Sep 5 00:06:31.723407 sshd[1631]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:31.727161 systemd[1]: sshd@6-10.0.0.21:22-10.0.0.1:46696.service: Deactivated successfully. Sep 5 00:06:31.729032 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 00:06:31.729207 systemd[1]: session-7.scope: Consumed 4.316s CPU time, 158.5M memory peak, 0B memory swap peak. Sep 5 00:06:31.730076 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Sep 5 00:06:31.731293 systemd-logind[1438]: Removed session 7. Sep 5 00:06:37.581517 systemd[1]: Created slice kubepods-besteffort-pod44cc4adb_68af_43c1_8969_7920196195af.slice - libcontainer container kubepods-besteffort-pod44cc4adb_68af_43c1_8969_7920196195af.slice. 
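The pod_startup_latency_tracker entry above is internally consistent arithmetic: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), since the startup SLI excludes time spent pulling images. A sketch reproducing the exact numbers from the entry:

// slo_duration.go - sketch: reproduce the kubelet pod_startup_latency_tracker
// arithmetic for tigera-operator-755d956888-j56cf from the entry above.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-05T00:06:20Z")            // podCreationTimestamp
	firstPull := mustParse("2025-09-05T00:06:20.537316181Z") // firstStartedPulling
	lastPull := mustParse("2025-09-05T00:06:24.911702599Z")  // lastFinishedPulling
	running := mustParse("2025-09-05T00:06:25.616003695Z")   // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration: 5.616003695s
	slo := e2e - lastPull.Sub(firstPull) // minus pull window: 1.241617277s
	fmt.Println(e2e, slo)
}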
Sep 5 00:06:37.605230 kubelet[2496]: I0905 00:06:37.605159 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/44cc4adb-68af-43c1-8969-7920196195af-typha-certs\") pod \"calico-typha-6cb599c5d5-2dqsh\" (UID: \"44cc4adb-68af-43c1-8969-7920196195af\") " pod="calico-system/calico-typha-6cb599c5d5-2dqsh" Sep 5 00:06:37.605230 kubelet[2496]: I0905 00:06:37.605220 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rk6r\" (UniqueName: \"kubernetes.io/projected/44cc4adb-68af-43c1-8969-7920196195af-kube-api-access-5rk6r\") pod \"calico-typha-6cb599c5d5-2dqsh\" (UID: \"44cc4adb-68af-43c1-8969-7920196195af\") " pod="calico-system/calico-typha-6cb599c5d5-2dqsh" Sep 5 00:06:37.605811 kubelet[2496]: I0905 00:06:37.605261 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44cc4adb-68af-43c1-8969-7920196195af-tigera-ca-bundle\") pod \"calico-typha-6cb599c5d5-2dqsh\" (UID: \"44cc4adb-68af-43c1-8969-7920196195af\") " pod="calico-system/calico-typha-6cb599c5d5-2dqsh" Sep 5 00:06:37.895233 kubelet[2496]: E0905 00:06:37.895196 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:37.895885 containerd[1460]: time="2025-09-05T00:06:37.895824464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cb599c5d5-2dqsh,Uid:44cc4adb-68af-43c1-8969-7920196195af,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:38.341645 containerd[1460]: time="2025-09-05T00:06:38.341349626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:38.341645 containerd[1460]: time="2025-09-05T00:06:38.341405691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:38.341645 containerd[1460]: time="2025-09-05T00:06:38.341416822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:38.341645 containerd[1460]: time="2025-09-05T00:06:38.341499618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:38.365973 systemd[1]: Started cri-containerd-e6375ceef20a08efc141fd2ad5254fe581b2ba473953d62dc9530225a8a432ca.scope - libcontainer container e6375ceef20a08efc141fd2ad5254fe581b2ba473953d62dc9530225a8a432ca. 
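The kubepods-besteffort-pod...slice units systemd reports creating for these pods follow a fixed naming rule: the QoS class and pod UID are joined into the unit name, with the dashes in the UID replaced by underscores (a plain "-" would be treated by systemd as a slice hierarchy separator). A sketch of the observed mapping, using a hypothetical helper name:

// slice_name.go - sketch: derive the systemd slice unit kubelet's systemd
// cgroup driver creates for a BestEffort pod, e.g. the calico-node pod below.
package main

import (
	"fmt"
	"strings"
)

// besteffortSliceName is a hypothetical helper mirroring the naming seen in
// this log: dashes in the pod UID become underscores inside the unit name.
func besteffortSliceName(podUID string) string {
	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
}

func main() {
	fmt.Println(besteffortSliceName("2c696602-0511-43f0-a2cb-f209c3afa390"))
	// kubepods-besteffort-pod2c696602_0511_43f0_a2cb_f209c3afa390.slice
}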
Sep 5 00:06:38.408717 containerd[1460]: time="2025-09-05T00:06:38.408674719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cb599c5d5-2dqsh,Uid:44cc4adb-68af-43c1-8969-7920196195af,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6375ceef20a08efc141fd2ad5254fe581b2ba473953d62dc9530225a8a432ca\"" Sep 5 00:06:38.412657 kubelet[2496]: E0905 00:06:38.412622 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:38.415623 containerd[1460]: time="2025-09-05T00:06:38.415588582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:06:38.683304 systemd[1]: Created slice kubepods-besteffort-pod2c696602_0511_43f0_a2cb_f209c3afa390.slice - libcontainer container kubepods-besteffort-pod2c696602_0511_43f0_a2cb_f209c3afa390.slice. Sep 5 00:06:38.713510 kubelet[2496]: I0905 00:06:38.711982 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-lib-modules\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.713510 kubelet[2496]: I0905 00:06:38.712029 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-flexvol-driver-host\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.713510 kubelet[2496]: I0905 00:06:38.712050 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2c696602-0511-43f0-a2cb-f209c3afa390-node-certs\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.713510 kubelet[2496]: I0905 00:06:38.712063 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-var-lib-calico\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.713510 kubelet[2496]: I0905 00:06:38.712078 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-cni-log-dir\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714067 kubelet[2496]: I0905 00:06:38.712094 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-xtables-lock\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714067 kubelet[2496]: I0905 00:06:38.712111 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-cni-bin-dir\") pod \"calico-node-v7t2b\" (UID: 
\"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714067 kubelet[2496]: I0905 00:06:38.712125 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8ht\" (UniqueName: \"kubernetes.io/projected/2c696602-0511-43f0-a2cb-f209c3afa390-kube-api-access-tt8ht\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714067 kubelet[2496]: I0905 00:06:38.712146 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-var-run-calico\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714067 kubelet[2496]: I0905 00:06:38.712160 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-cni-net-dir\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714208 kubelet[2496]: I0905 00:06:38.712190 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2c696602-0511-43f0-a2cb-f209c3afa390-policysync\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.714208 kubelet[2496]: I0905 00:06:38.712204 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c696602-0511-43f0-a2cb-f209c3afa390-tigera-ca-bundle\") pod \"calico-node-v7t2b\" (UID: \"2c696602-0511-43f0-a2cb-f209c3afa390\") " pod="calico-system/calico-node-v7t2b" Sep 5 00:06:38.823967 kubelet[2496]: E0905 00:06:38.823922 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:38.823967 kubelet[2496]: W0905 00:06:38.823947 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:38.824125 kubelet[2496]: E0905 00:06:38.823990 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:38.824261 kubelet[2496]: E0905 00:06:38.824239 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:38.824261 kubelet[2496]: W0905 00:06:38.824250 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:38.824309 kubelet[2496]: E0905 00:06:38.824260 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:38.952163 kubelet[2496]: E0905 00:06:38.951173 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:39.002105 kubelet[2496]: E0905 00:06:39.002061 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.002105 kubelet[2496]: W0905 00:06:39.002087 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.002105 kubelet[2496]: E0905 00:06:39.002122 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.002439 kubelet[2496]: E0905 00:06:39.002410 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.002439 kubelet[2496]: W0905 00:06:39.002423 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.002439 kubelet[2496]: E0905 00:06:39.002431 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.002675 kubelet[2496]: E0905 00:06:39.002660 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.002675 kubelet[2496]: W0905 00:06:39.002672 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.002733 kubelet[2496]: E0905 00:06:39.002681 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.002973 kubelet[2496]: E0905 00:06:39.002945 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.002973 kubelet[2496]: W0905 00:06:39.002956 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.002973 kubelet[2496]: E0905 00:06:39.002967 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.003190 kubelet[2496]: E0905 00:06:39.003163 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.003190 kubelet[2496]: W0905 00:06:39.003176 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.003190 kubelet[2496]: E0905 00:06:39.003185 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.003385 kubelet[2496]: E0905 00:06:39.003370 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.003385 kubelet[2496]: W0905 00:06:39.003381 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.003445 kubelet[2496]: E0905 00:06:39.003390 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.003618 kubelet[2496]: E0905 00:06:39.003598 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.003618 kubelet[2496]: W0905 00:06:39.003611 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.003618 kubelet[2496]: E0905 00:06:39.003620 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.003858 kubelet[2496]: E0905 00:06:39.003845 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.003858 kubelet[2496]: W0905 00:06:39.003855 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.003912 kubelet[2496]: E0905 00:06:39.003863 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.004146 kubelet[2496]: E0905 00:06:39.004114 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.004190 kubelet[2496]: W0905 00:06:39.004144 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.004190 kubelet[2496]: E0905 00:06:39.004173 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.004432 kubelet[2496]: E0905 00:06:39.004418 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.004432 kubelet[2496]: W0905 00:06:39.004428 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.004494 kubelet[2496]: E0905 00:06:39.004437 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.004668 kubelet[2496]: E0905 00:06:39.004654 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.004668 kubelet[2496]: W0905 00:06:39.004664 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.004789 kubelet[2496]: E0905 00:06:39.004672 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.004909 kubelet[2496]: E0905 00:06:39.004895 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.004909 kubelet[2496]: W0905 00:06:39.004906 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.004967 kubelet[2496]: E0905 00:06:39.004914 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.005122 kubelet[2496]: E0905 00:06:39.005109 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.005122 kubelet[2496]: W0905 00:06:39.005118 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.005170 kubelet[2496]: E0905 00:06:39.005126 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.005311 kubelet[2496]: E0905 00:06:39.005298 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.005311 kubelet[2496]: W0905 00:06:39.005309 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.005355 kubelet[2496]: E0905 00:06:39.005318 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.005501 kubelet[2496]: E0905 00:06:39.005488 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.005501 kubelet[2496]: W0905 00:06:39.005498 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.005560 kubelet[2496]: E0905 00:06:39.005507 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.005705 kubelet[2496]: E0905 00:06:39.005692 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.005705 kubelet[2496]: W0905 00:06:39.005702 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.005748 kubelet[2496]: E0905 00:06:39.005710 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.005922 kubelet[2496]: E0905 00:06:39.005909 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.005922 kubelet[2496]: W0905 00:06:39.005918 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.005974 kubelet[2496]: E0905 00:06:39.005926 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.006117 kubelet[2496]: E0905 00:06:39.006104 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.006117 kubelet[2496]: W0905 00:06:39.006113 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.006160 kubelet[2496]: E0905 00:06:39.006122 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.006302 kubelet[2496]: E0905 00:06:39.006289 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.006302 kubelet[2496]: W0905 00:06:39.006298 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.006350 kubelet[2496]: E0905 00:06:39.006306 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.006491 kubelet[2496]: E0905 00:06:39.006478 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.006491 kubelet[2496]: W0905 00:06:39.006487 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.006535 kubelet[2496]: E0905 00:06:39.006494 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.015958 kubelet[2496]: E0905 00:06:39.015926 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.015958 kubelet[2496]: W0905 00:06:39.015949 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.016045 kubelet[2496]: E0905 00:06:39.015967 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.016045 kubelet[2496]: I0905 00:06:39.016011 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cb024f3a-da3b-4d8c-8844-6229ed3989be-varrun\") pod \"csi-node-driver-9mdwq\" (UID: \"cb024f3a-da3b-4d8c-8844-6229ed3989be\") " pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:39.016379 kubelet[2496]: E0905 00:06:39.016354 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.016379 kubelet[2496]: W0905 00:06:39.016367 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.016440 kubelet[2496]: E0905 00:06:39.016383 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.016440 kubelet[2496]: I0905 00:06:39.016405 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cb024f3a-da3b-4d8c-8844-6229ed3989be-registration-dir\") pod \"csi-node-driver-9mdwq\" (UID: \"cb024f3a-da3b-4d8c-8844-6229ed3989be\") " pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:39.016644 kubelet[2496]: E0905 00:06:39.016619 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.016644 kubelet[2496]: W0905 00:06:39.016633 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.016692 kubelet[2496]: E0905 00:06:39.016646 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.016692 kubelet[2496]: I0905 00:06:39.016659 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cb024f3a-da3b-4d8c-8844-6229ed3989be-kubelet-dir\") pod \"csi-node-driver-9mdwq\" (UID: \"cb024f3a-da3b-4d8c-8844-6229ed3989be\") " pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:39.016894 kubelet[2496]: E0905 00:06:39.016878 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.016894 kubelet[2496]: W0905 00:06:39.016892 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.016957 kubelet[2496]: E0905 00:06:39.016908 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.017104 kubelet[2496]: E0905 00:06:39.017089 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.017131 kubelet[2496]: W0905 00:06:39.017101 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.017131 kubelet[2496]: E0905 00:06:39.017118 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.017340 kubelet[2496]: E0905 00:06:39.017325 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.017340 kubelet[2496]: W0905 00:06:39.017336 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.017408 kubelet[2496]: E0905 00:06:39.017349 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.017567 kubelet[2496]: E0905 00:06:39.017553 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.017567 kubelet[2496]: W0905 00:06:39.017563 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.017624 kubelet[2496]: E0905 00:06:39.017578 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.017772 kubelet[2496]: E0905 00:06:39.017759 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.017772 kubelet[2496]: W0905 00:06:39.017769 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.017812 kubelet[2496]: E0905 00:06:39.017781 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.017812 kubelet[2496]: I0905 00:06:39.017797 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cb024f3a-da3b-4d8c-8844-6229ed3989be-socket-dir\") pod \"csi-node-driver-9mdwq\" (UID: \"cb024f3a-da3b-4d8c-8844-6229ed3989be\") " pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:39.018052 kubelet[2496]: E0905 00:06:39.018032 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.018052 kubelet[2496]: W0905 00:06:39.018051 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.018113 kubelet[2496]: E0905 00:06:39.018085 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.018135 kubelet[2496]: I0905 00:06:39.018121 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np7nx\" (UniqueName: \"kubernetes.io/projected/cb024f3a-da3b-4d8c-8844-6229ed3989be-kube-api-access-np7nx\") pod \"csi-node-driver-9mdwq\" (UID: \"cb024f3a-da3b-4d8c-8844-6229ed3989be\") " pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:39.018264 kubelet[2496]: E0905 00:06:39.018249 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.018264 kubelet[2496]: W0905 00:06:39.018261 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.018313 kubelet[2496]: E0905 00:06:39.018288 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.018490 kubelet[2496]: E0905 00:06:39.018476 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.018490 kubelet[2496]: W0905 00:06:39.018487 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.018548 kubelet[2496]: E0905 00:06:39.018502 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.018713 kubelet[2496]: E0905 00:06:39.018698 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.018713 kubelet[2496]: W0905 00:06:39.018710 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.018763 kubelet[2496]: E0905 00:06:39.018723 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.018975 kubelet[2496]: E0905 00:06:39.018958 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.018975 kubelet[2496]: W0905 00:06:39.018971 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.019050 kubelet[2496]: E0905 00:06:39.018982 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.019207 kubelet[2496]: E0905 00:06:39.019193 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.019207 kubelet[2496]: W0905 00:06:39.019203 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.019255 kubelet[2496]: E0905 00:06:39.019212 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.019439 kubelet[2496]: E0905 00:06:39.019426 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.019439 kubelet[2496]: W0905 00:06:39.019436 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.019485 kubelet[2496]: E0905 00:06:39.019444 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.119240 kubelet[2496]: E0905 00:06:39.119194 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.119240 kubelet[2496]: W0905 00:06:39.119220 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.119240 kubelet[2496]: E0905 00:06:39.119240 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.119576 kubelet[2496]: E0905 00:06:39.119554 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.119576 kubelet[2496]: W0905 00:06:39.119566 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.119624 kubelet[2496]: E0905 00:06:39.119582 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.119806 kubelet[2496]: E0905 00:06:39.119785 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.119806 kubelet[2496]: W0905 00:06:39.119797 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.119884 kubelet[2496]: E0905 00:06:39.119812 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.120206 kubelet[2496]: E0905 00:06:39.120168 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.120239 kubelet[2496]: W0905 00:06:39.120200 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.120261 kubelet[2496]: E0905 00:06:39.120235 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.120462 kubelet[2496]: E0905 00:06:39.120446 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.120462 kubelet[2496]: W0905 00:06:39.120459 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.120521 kubelet[2496]: E0905 00:06:39.120475 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.120720 kubelet[2496]: E0905 00:06:39.120702 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.120720 kubelet[2496]: W0905 00:06:39.120716 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.120770 kubelet[2496]: E0905 00:06:39.120733 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.121049 kubelet[2496]: E0905 00:06:39.121032 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.121049 kubelet[2496]: W0905 00:06:39.121046 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.121109 kubelet[2496]: E0905 00:06:39.121080 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.121250 kubelet[2496]: E0905 00:06:39.121235 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.121250 kubelet[2496]: W0905 00:06:39.121247 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.121303 kubelet[2496]: E0905 00:06:39.121279 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.121439 kubelet[2496]: E0905 00:06:39.121423 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.121439 kubelet[2496]: W0905 00:06:39.121435 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.121489 kubelet[2496]: E0905 00:06:39.121463 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.121687 kubelet[2496]: E0905 00:06:39.121668 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.121687 kubelet[2496]: W0905 00:06:39.121682 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.121750 kubelet[2496]: E0905 00:06:39.121710 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:39.121920 kubelet[2496]: E0905 00:06:39.121900 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.121920 kubelet[2496]: W0905 00:06:39.121916 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.122001 kubelet[2496]: E0905 00:06:39.121935 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:39.122239 kubelet[2496]: E0905 00:06:39.122224 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:39.122239 kubelet[2496]: W0905 00:06:39.122235 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:39.122290 kubelet[2496]: E0905 00:06:39.122249 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 5 00:06:39.287485 containerd[1460]: time="2025-09-05T00:06:39.287356185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v7t2b,Uid:2c696602-0511-43f0-a2cb-f209c3afa390,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:39.735601 containerd[1460]: time="2025-09-05T00:06:39.735322072Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:39.735601 containerd[1460]: time="2025-09-05T00:06:39.735403736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:39.735601 containerd[1460]: time="2025-09-05T00:06:39.735419375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:39.735601 containerd[1460]: time="2025-09-05T00:06:39.735543759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:39.757968 systemd[1]: Started cri-containerd-754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be.scope - libcontainer container 754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be. Sep 5 00:06:39.778993 containerd[1460]: time="2025-09-05T00:06:39.778950266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v7t2b,Uid:2c696602-0511-43f0-a2cb-f209c3afa390,Namespace:calico-system,Attempt:0,} returns sandbox id \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\"" Sep 5 00:06:40.577880 kubelet[2496]: E0905 00:06:40.577820 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:42.579850 kubelet[2496]: E0905 00:06:42.579795 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:43.780948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124175194.mount: Deactivated successfully.
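[Editor's note] The driver-call failures above come from the kubelet probing the FlexVolume plugin directory nodeagent~uds: the driver binary /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist yet ("executable file not found in $PATH"), so the init call produces empty output, which then fails JSON unmarshalling ("unexpected end of JSON input"). A FlexVolume driver is just an executable that answers call-out subcommands (init, mount, unmount, ...) with a JSON status object on stdout. A minimal illustrative sketch of a driver that would satisfy the init probe (the program name is hypothetical; the response shape follows the FlexVolume call-out convention of status/message/capabilities):

    // flexvol-init-stub: hypothetical minimal FlexVolume driver sketch.
    // The kubelet invokes the driver as "<driver> init" and expects JSON on stdout.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the FlexVolume call-out response shape.
    type driverStatus struct {
        Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Println(`{"status":"Failure","message":"no command given"}`)
            os.Exit(1)
        }
        var resp driverStatus
        switch os.Args[1] {
        case "init":
            // attach=false tells the kubelet not to expect attach/detach call-outs.
            resp = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
        default:
            resp = driverStatus{Status: "Not supported", Message: os.Args[1]}
        }
        out, _ := json.Marshal(resp)
        fmt.Println(string(out)) // non-empty JSON avoids "unexpected end of JSON input"
    }

Calico's pod2daemon-flexvol image, pulled later in this log, installs exactly such a driver binary into that directory, which is what eventually satisfies the probe.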
Sep 5 00:06:44.099352 containerd[1460]: time="2025-09-05T00:06:44.099209637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:44.100169 containerd[1460]: time="2025-09-05T00:06:44.100120278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:06:44.101321 containerd[1460]: time="2025-09-05T00:06:44.101291490Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:44.103260 containerd[1460]: time="2025-09-05T00:06:44.103238450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:44.103895 containerd[1460]: time="2025-09-05T00:06:44.103853536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 5.688227825s" Sep 5 00:06:44.103934 containerd[1460]: time="2025-09-05T00:06:44.103900274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:06:44.104953 containerd[1460]: time="2025-09-05T00:06:44.104926884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:06:44.119238 containerd[1460]: time="2025-09-05T00:06:44.119185478Z" level=info msg="CreateContainer within sandbox \"e6375ceef20a08efc141fd2ad5254fe581b2ba473953d62dc9530225a8a432ca\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:06:44.136683 containerd[1460]: time="2025-09-05T00:06:44.136615483Z" level=info msg="CreateContainer within sandbox \"e6375ceef20a08efc141fd2ad5254fe581b2ba473953d62dc9530225a8a432ca\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"433525d9e6711a36cbcfa32c4cfa290b1f01ba4677594782f2dc5be1bfee94b9\"" Sep 5 00:06:44.139659 containerd[1460]: time="2025-09-05T00:06:44.139601297Z" level=info msg="StartContainer for \"433525d9e6711a36cbcfa32c4cfa290b1f01ba4677594782f2dc5be1bfee94b9\"" Sep 5 00:06:44.170100 systemd[1]: Started cri-containerd-433525d9e6711a36cbcfa32c4cfa290b1f01ba4677594782f2dc5be1bfee94b9.scope - libcontainer container 433525d9e6711a36cbcfa32c4cfa290b1f01ba4677594782f2dc5be1bfee94b9. 
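[Editor's note] The ImageCreate/Pulled lines above show containerd resolving the typha image by tag and digest (about 35 MB read in roughly 5.7 s) inside the CRI image store. A hedged sketch of the same pull through containerd's Go client, assuming the stock socket path and the "k8s.io" namespace that CRI-managed images live in (error handling trimmed to the essentials):

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
    }

The subsequent CreateContainer/StartContainer pair then reuses the pulled image reference inside the already-running calico-typha sandbox.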
Sep 5 00:06:44.212655 containerd[1460]: time="2025-09-05T00:06:44.212602095Z" level=info msg="StartContainer for \"433525d9e6711a36cbcfa32c4cfa290b1f01ba4677594782f2dc5be1bfee94b9\" returns successfully" Sep 5 00:06:44.589805 kubelet[2496]: E0905 00:06:44.589534 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:44.640572 kubelet[2496]: E0905 00:06:44.640534 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:44.642771 kubelet[2496]: E0905 00:06:44.642743 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:44.642771 kubelet[2496]: W0905 00:06:44.642764 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:44.643463 kubelet[2496]: E0905 00:06:44.643424 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
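[Editor's note] The "Nameserver limits exceeded" entries above reflect the resolver cap Kubernetes enforces: the kubelet applies at most three nameservers from the node's resolv.conf (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest. A stand-alone check in the same spirit, as a sketch (stdlib only; the three-server cap is the documented Kubernetes/glibc limit, the file path is the conventional one):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // resolver limit the kubelet enforces

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded: applying %v, omitting %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Println("nameservers:", servers)
        }
    }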
Sep 5 00:06:44.669964 kubelet[2496]: I0905 00:06:44.669925 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cb599c5d5-2dqsh" podStartSLOduration=1.980586294 podStartE2EDuration="7.669912253s" podCreationTimestamp="2025-09-05 00:06:37 +0000 UTC" firstStartedPulling="2025-09-05 00:06:38.415352047 +0000 UTC m=+23.938849621" lastFinishedPulling="2025-09-05 00:06:44.104677996 +0000 UTC m=+29.628175580" observedRunningTime="2025-09-05 00:06:44.668555341 +0000 UTC m=+30.192052915" watchObservedRunningTime="2025-09-05 00:06:44.669912253 +0000 UTC m=+30.193409827" Sep 5 00:06:45.641940 kubelet[2496]: I0905 00:06:45.641898 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:06:45.642381 kubelet[2496]: E0905 00:06:45.642300 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:45.652697 kubelet[2496]: E0905 00:06:45.652662 2496 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:45.652697 kubelet[2496]: W0905 00:06:45.652682 2496 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:45.652776 kubelet[2496]: E0905 00:06:45.652702 2496 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
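[Editor's note] The pod_startup_latency_tracker entry above is internally consistent: podStartE2EDuration is the observed running time minus the pod creation timestamp (00:06:44.669912253 - 00:06:37 = 7.669912253 s), and podStartSLOduration subtracts the image-pull window measured on the monotonic clock (m=+29.628175580 - m=+23.938849621 = 5.689325959 s), giving 1.980586294 s. A small reproduction of that arithmetic, with the values copied from the log entry (a sketch; parse errors ignored for brevity):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        created, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:06:37Z")
        running, _ := time.Parse(time.RFC3339Nano, "2025-09-05T00:06:44.669912253Z")

        e2e := running.Sub(created) // podStartE2EDuration
        // Pull window taken from the monotonic (m=+...) readings in the log entry.
        pull := time.Duration((29.628175580 - 23.938849621) * float64(time.Second))
        slo := e2e - pull // podStartSLOduration excludes time spent pulling images

        fmt.Println("E2E:", e2e) // 7.669912253s
        fmt.Println("SLO:", slo) // ~1.980586294s (modulo float rounding)
    }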
Sep 5 00:06:46.583721 kubelet[2496]: E0905 00:06:46.583672 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:47.420117 containerd[1460]: time="2025-09-05T00:06:47.420050942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:47.433934 containerd[1460]: time="2025-09-05T00:06:47.433883284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:06:47.456385 containerd[1460]: time="2025-09-05T00:06:47.456342404Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:47.464917 containerd[1460]: time="2025-09-05T00:06:47.464880988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:47.465468 containerd[1460]: time="2025-09-05T00:06:47.465405003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 3.360443744s" Sep 5 00:06:47.465468 containerd[1460]: time="2025-09-05T00:06:47.465463222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:06:47.467808 containerd[1460]: time="2025-09-05T00:06:47.467779143Z" level=info msg="CreateContainer within sandbox \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:06:47.793811 containerd[1460]: time="2025-09-05T00:06:47.793681729Z" level=info msg="CreateContainer within sandbox \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55\"" Sep 5 00:06:47.794506 containerd[1460]: time="2025-09-05T00:06:47.794089234Z" level=info msg="StartContainer for \"5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55\"" Sep 5 00:06:47.828982 systemd[1]: Started cri-containerd-5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55.scope - libcontainer container 5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55. Sep 5 00:06:47.873505 systemd[1]: cri-containerd-5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55.scope: Deactivated successfully.
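[Editor's note] The flexvol-driver container above (from Calico's pod2daemon-flexvol image) is a short-lived init container: it copies the uds driver binary into the FlexVolume plugin directory the kubelet has been probing, then exits, which is why its systemd scope is deactivated about 45 ms after start and why the unmarshal errors should stop recurring once it has run. A quick stand-alone check that the driver landed, as a sketch (the path is taken verbatim from the error messages above):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
        info, err := os.Stat(driver)
        if err != nil {
            // Missing binary: the kubelet would keep logging init probe failures.
            fmt.Println("driver missing:", err)
            return
        }
        fmt.Printf("driver present: %s (mode %v, %d bytes)\n", driver, info.Mode(), info.Size())
    }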
Sep 5 00:06:47.950101 containerd[1460]: time="2025-09-05T00:06:47.950043048Z" level=info msg="StartContainer for \"5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55\" returns successfully" Sep 5 00:06:47.974075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55-rootfs.mount: Deactivated successfully. Sep 5 00:06:48.577607 kubelet[2496]: E0905 00:06:48.577550 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:48.687307 containerd[1460]: time="2025-09-05T00:06:48.687229639Z" level=info msg="shim disconnected" id=5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55 namespace=k8s.io Sep 5 00:06:48.687307 containerd[1460]: time="2025-09-05T00:06:48.687304579Z" level=warning msg="cleaning up after shim disconnected" id=5aaa394cef2253f25dd0953c4a3e308d3c2129a7d009ea8e98b5af48290ffb55 namespace=k8s.io Sep 5 00:06:48.687882 containerd[1460]: time="2025-09-05T00:06:48.687317093Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:06:49.653290 containerd[1460]: time="2025-09-05T00:06:49.651965400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:06:50.577322 kubelet[2496]: E0905 00:06:50.577238 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:52.578154 kubelet[2496]: E0905 00:06:52.578091 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:54.074134 systemd[1]: Started sshd@7-10.0.0.21:22-10.0.0.1:40056.service - OpenSSH per-connection server daemon (10.0.0.1:40056). Sep 5 00:06:54.106477 sshd[3279]: Accepted publickey for core from 10.0.0.1 port 40056 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:06:54.108528 sshd[3279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:54.113349 systemd-logind[1438]: New session 8 of user core. Sep 5 00:06:54.117990 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 5 00:06:54.577541 kubelet[2496]: E0905 00:06:54.577473 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:55.066149 sshd[3279]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:55.069769 systemd[1]: sshd@7-10.0.0.21:22-10.0.0.1:40056.service: Deactivated successfully. Sep 5 00:06:55.071473 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 00:06:55.072280 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. 
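Every "Error syncing pod" entry above repeats the same underlying condition: containerd reports NetworkReady=false until a CNI network config appears in its conf directory, and Calico's install-cni container (whose image is being pulled in this stretch) is what eventually writes one. A quick check, assuming containerd's conventional default conf dir /etc/cni/net.d:

```go
package main

import (
	"fmt"
	"os"
)

// Lists the CNI configuration directory. Until install-cni drops a file
// such as 10-calico.conflist here, containerd keeps reporting
// "cni plugin not initialized" and no pod sandbox can get networking.
func main() {
	const confDir = "/etc/cni/net.d" // containerd's usual default; an assumption here
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("no CNI config yet:", err)
		return
	}
	for _, e := range entries {
		fmt.Println(e.Name())
	}
}
```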
Sep 5 00:06:55.073162 systemd-logind[1438]: Removed session 8. Sep 5 00:06:55.140578 containerd[1460]: time="2025-09-05T00:06:55.140512871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:55.141463 containerd[1460]: time="2025-09-05T00:06:55.141413551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:06:55.145821 containerd[1460]: time="2025-09-05T00:06:55.145783365Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:55.149113 containerd[1460]: time="2025-09-05T00:06:55.149079964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:55.149952 containerd[1460]: time="2025-09-05T00:06:55.149915212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.497898887s" Sep 5 00:06:55.150003 containerd[1460]: time="2025-09-05T00:06:55.149951310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:06:55.152000 containerd[1460]: time="2025-09-05T00:06:55.151958999Z" level=info msg="CreateContainer within sandbox \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:06:55.173015 containerd[1460]: time="2025-09-05T00:06:55.172958353Z" level=info msg="CreateContainer within sandbox \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08\"" Sep 5 00:06:55.173502 containerd[1460]: time="2025-09-05T00:06:55.173470664Z" level=info msg="StartContainer for \"795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08\"" Sep 5 00:06:55.205994 systemd[1]: Started cri-containerd-795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08.scope - libcontainer container 795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08. Sep 5 00:06:55.240391 containerd[1460]: time="2025-09-05T00:06:55.240326227Z" level=info msg="StartContainer for \"795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08\" returns successfully" Sep 5 00:06:56.475305 systemd[1]: cri-containerd-795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08.scope: Deactivated successfully. Sep 5 00:06:56.498484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08-rootfs.mount: Deactivated successfully. 
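The two "Pulled image ... in" entries give enough to estimate pull throughput; "bytes read" appears to be what actually came over the wire, which is why it is smaller than the reported repo size (layers already present locally are not re-read). A back-of-the-envelope sketch using only the numbers from the log:

```go
package main

import "fmt"

// Rough pull-throughput arithmetic from the "stop pulling image" and
// "Pulled image ... in" entries above. Interpreting "bytes read" as the
// transferred byte count is an assumption; treat the results as estimates.
func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the log
		seconds float64 // duration from the "Pulled image ... in" entry
	}{
		{"pod2daemon-flexvol:v3.30.3", 4446660, 3.360443744},
		{"cni:v3.30.3", 70440613, 5.497898887},
	}
	for _, p := range pulls {
		fmt.Printf("%s: %.1f MB/s\n", p.image, p.bytes/p.seconds/1e6)
	}
	// Output: roughly 1.3 MB/s for the flexvol image, 12.8 MB/s for cni.
}
```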
Sep 5 00:06:56.554540 kubelet[2496]: I0905 00:06:56.554508 2496 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 00:06:56.588568 systemd[1]: Created slice kubepods-besteffort-podcb024f3a_da3b_4d8c_8844_6229ed3989be.slice - libcontainer container kubepods-besteffort-podcb024f3a_da3b_4d8c_8844_6229ed3989be.slice. Sep 5 00:06:56.591909 containerd[1460]: time="2025-09-05T00:06:56.591862302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9mdwq,Uid:cb024f3a-da3b-4d8c-8844-6229ed3989be,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:56.785921 systemd[1]: Created slice kubepods-burstable-pod81442ceb_28f7_40f4_af38_c4787936e8ed.slice - libcontainer container kubepods-burstable-pod81442ceb_28f7_40f4_af38_c4787936e8ed.slice. Sep 5 00:06:56.791577 systemd[1]: Created slice kubepods-burstable-pod39d8bbc4_53e3_418e_a624_98460b3ce6db.slice - libcontainer container kubepods-burstable-pod39d8bbc4_53e3_418e_a624_98460b3ce6db.slice. Sep 5 00:06:56.796039 systemd[1]: Created slice kubepods-besteffort-pod5849096c_759f_41e6_9d07_d3b4efa485b6.slice - libcontainer container kubepods-besteffort-pod5849096c_759f_41e6_9d07_d3b4efa485b6.slice. Sep 5 00:06:56.800950 systemd[1]: Created slice kubepods-besteffort-pod870d18b6_80da_4049_85fc_c52675a83993.slice - libcontainer container kubepods-besteffort-pod870d18b6_80da_4049_85fc_c52675a83993.slice. Sep 5 00:06:56.804085 systemd[1]: Created slice kubepods-besteffort-podf01083ce_5e4f_4b16_83b8_1cdba0c5ae5e.slice - libcontainer container kubepods-besteffort-podf01083ce_5e4f_4b16_83b8_1cdba0c5ae5e.slice. Sep 5 00:06:56.807997 systemd[1]: Created slice kubepods-besteffort-pod75204577_0370_4fa9_a0b9_0d372c897a36.slice - libcontainer container kubepods-besteffort-pod75204577_0370_4fa9_a0b9_0d372c897a36.slice. 
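The "Created slice" entries follow a naming scheme that can be read straight off the log: the pod's QoS class is folded into the slice name and dashes in the pod UID become underscores. A small sketch reproducing the names above:

```go
package main

import (
	"fmt"
	"strings"
)

// Rebuilds the systemd slice name kubelet uses for a pod cgroup, matching
// the "Created slice" entries above for besteffort and burstable pods.
func sliceName(qos, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(sliceName("besteffort", "cb024f3a-da3b-4d8c-8844-6229ed3989be"))
	// kubepods-besteffort-podcb024f3a_da3b_4d8c_8844_6229ed3989be.slice
}
```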
Sep 5 00:06:56.850991 kubelet[2496]: I0905 00:06:56.850956 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvdbk\" (UniqueName: \"kubernetes.io/projected/870d18b6-80da-4049-85fc-c52675a83993-kube-api-access-dvdbk\") pod \"calico-apiserver-6f6b95c467-9vf8n\" (UID: \"870d18b6-80da-4049-85fc-c52675a83993\") " pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" Sep 5 00:06:56.851097 kubelet[2496]: I0905 00:06:56.851004 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75204577-0370-4fa9-a0b9-0d372c897a36-config\") pod \"goldmane-54d579b49d-66gx6\" (UID: \"75204577-0370-4fa9-a0b9-0d372c897a36\") " pod="calico-system/goldmane-54d579b49d-66gx6" Sep 5 00:06:56.851097 kubelet[2496]: I0905 00:06:56.851030 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72q6v\" (UniqueName: \"kubernetes.io/projected/f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e-kube-api-access-72q6v\") pod \"calico-kube-controllers-dc6b5cb6f-d46dk\" (UID: \"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e\") " pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" Sep 5 00:06:56.851097 kubelet[2496]: I0905 00:06:56.851054 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/870d18b6-80da-4049-85fc-c52675a83993-calico-apiserver-certs\") pod \"calico-apiserver-6f6b95c467-9vf8n\" (UID: \"870d18b6-80da-4049-85fc-c52675a83993\") " pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" Sep 5 00:06:56.851097 kubelet[2496]: I0905 00:06:56.851090 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg846\" (UniqueName: \"kubernetes.io/projected/5849096c-759f-41e6-9d07-d3b4efa485b6-kube-api-access-cg846\") pod \"calico-apiserver-6f6b95c467-pn4h6\" (UID: \"5849096c-759f-41e6-9d07-d3b4efa485b6\") " pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" Sep 5 00:06:56.851253 kubelet[2496]: I0905 00:06:56.851129 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81442ceb-28f7-40f4-af38-c4787936e8ed-config-volume\") pod \"coredns-668d6bf9bc-78zh5\" (UID: \"81442ceb-28f7-40f4-af38-c4787936e8ed\") " pod="kube-system/coredns-668d6bf9bc-78zh5" Sep 5 00:06:56.851253 kubelet[2496]: I0905 00:06:56.851149 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/75204577-0370-4fa9-a0b9-0d372c897a36-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-66gx6\" (UID: \"75204577-0370-4fa9-a0b9-0d372c897a36\") " pod="calico-system/goldmane-54d579b49d-66gx6" Sep 5 00:06:56.851253 kubelet[2496]: I0905 00:06:56.851192 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frctp\" (UniqueName: \"kubernetes.io/projected/75204577-0370-4fa9-a0b9-0d372c897a36-kube-api-access-frctp\") pod \"goldmane-54d579b49d-66gx6\" (UID: \"75204577-0370-4fa9-a0b9-0d372c897a36\") " pod="calico-system/goldmane-54d579b49d-66gx6" Sep 5 00:06:56.851253 kubelet[2496]: I0905 00:06:56.851220 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw642\" (UniqueName: 
\"kubernetes.io/projected/39d8bbc4-53e3-418e-a624-98460b3ce6db-kube-api-access-cw642\") pod \"coredns-668d6bf9bc-6qqh7\" (UID: \"39d8bbc4-53e3-418e-a624-98460b3ce6db\") " pod="kube-system/coredns-668d6bf9bc-6qqh7" Sep 5 00:06:56.851390 kubelet[2496]: I0905 00:06:56.851252 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/75204577-0370-4fa9-a0b9-0d372c897a36-goldmane-key-pair\") pod \"goldmane-54d579b49d-66gx6\" (UID: \"75204577-0370-4fa9-a0b9-0d372c897a36\") " pod="calico-system/goldmane-54d579b49d-66gx6" Sep 5 00:06:56.851390 kubelet[2496]: I0905 00:06:56.851272 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e-tigera-ca-bundle\") pod \"calico-kube-controllers-dc6b5cb6f-d46dk\" (UID: \"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e\") " pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" Sep 5 00:06:56.851390 kubelet[2496]: I0905 00:06:56.851286 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6v6k\" (UniqueName: \"kubernetes.io/projected/81442ceb-28f7-40f4-af38-c4787936e8ed-kube-api-access-j6v6k\") pod \"coredns-668d6bf9bc-78zh5\" (UID: \"81442ceb-28f7-40f4-af38-c4787936e8ed\") " pod="kube-system/coredns-668d6bf9bc-78zh5" Sep 5 00:06:56.851390 kubelet[2496]: I0905 00:06:56.851303 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39d8bbc4-53e3-418e-a624-98460b3ce6db-config-volume\") pod \"coredns-668d6bf9bc-6qqh7\" (UID: \"39d8bbc4-53e3-418e-a624-98460b3ce6db\") " pod="kube-system/coredns-668d6bf9bc-6qqh7" Sep 5 00:06:56.851390 kubelet[2496]: I0905 00:06:56.851317 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5849096c-759f-41e6-9d07-d3b4efa485b6-calico-apiserver-certs\") pod \"calico-apiserver-6f6b95c467-pn4h6\" (UID: \"5849096c-759f-41e6-9d07-d3b4efa485b6\") " pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" Sep 5 00:06:56.943035 systemd[1]: Created slice kubepods-besteffort-pod294d5db6_d083_4e8f_8a08_06a5c0441597.slice - libcontainer container kubepods-besteffort-pod294d5db6_d083_4e8f_8a08_06a5c0441597.slice. 
Sep 5 00:06:57.052941 kubelet[2496]: I0905 00:06:57.052759 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-backend-key-pair\") pod \"whisker-55d99f4944-556sk\" (UID: \"294d5db6-d083-4e8f-8a08-06a5c0441597\") " pod="calico-system/whisker-55d99f4944-556sk" Sep 5 00:06:57.053093 kubelet[2496]: I0905 00:06:57.052949 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-ca-bundle\") pod \"whisker-55d99f4944-556sk\" (UID: \"294d5db6-d083-4e8f-8a08-06a5c0441597\") " pod="calico-system/whisker-55d99f4944-556sk" Sep 5 00:06:57.053093 kubelet[2496]: I0905 00:06:57.053010 2496 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtjmq\" (UniqueName: \"kubernetes.io/projected/294d5db6-d083-4e8f-8a08-06a5c0441597-kube-api-access-gtjmq\") pod \"whisker-55d99f4944-556sk\" (UID: \"294d5db6-d083-4e8f-8a08-06a5c0441597\") " pod="calico-system/whisker-55d99f4944-556sk" Sep 5 00:06:57.088420 containerd[1460]: time="2025-09-05T00:06:57.088341885Z" level=info msg="shim disconnected" id=795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08 namespace=k8s.io Sep 5 00:06:57.088420 containerd[1460]: time="2025-09-05T00:06:57.088407368Z" level=warning msg="cleaning up after shim disconnected" id=795dc1d3d5684412f5df938e7e5a69d17a871d1ffef4ef23a08aad01b8c47e08 namespace=k8s.io Sep 5 00:06:57.088420 containerd[1460]: time="2025-09-05T00:06:57.088418288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:06:57.152372 containerd[1460]: time="2025-09-05T00:06:57.151575039Z" level=warning msg="cleanup warnings time=\"2025-09-05T00:06:57Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 5 00:06:57.214627 containerd[1460]: time="2025-09-05T00:06:57.214553996Z" level=error msg="Failed to destroy network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.215122 containerd[1460]: time="2025-09-05T00:06:57.215082618Z" level=error msg="encountered an error cleaning up failed sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.215187 containerd[1460]: time="2025-09-05T00:06:57.215158461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9mdwq,Uid:cb024f3a-da3b-4d8c-8844-6229ed3989be,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.225058 kubelet[2496]: E0905 00:06:57.224994 2496 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.225125 kubelet[2496]: E0905 00:06:57.225085 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:57.225125 kubelet[2496]: E0905 00:06:57.225107 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-9mdwq" Sep 5 00:06:57.225193 kubelet[2496]: E0905 00:06:57.225166 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-9mdwq_calico-system(cb024f3a-da3b-4d8c-8844-6229ed3989be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-9mdwq_calico-system(cb024f3a-da3b-4d8c-8844-6229ed3989be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:57.246506 containerd[1460]: time="2025-09-05T00:06:57.246454006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d99f4944-556sk,Uid:294d5db6-d083-4e8f-8a08-06a5c0441597,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:57.310921 containerd[1460]: time="2025-09-05T00:06:57.310748712Z" level=error msg="Failed to destroy network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.311273 containerd[1460]: time="2025-09-05T00:06:57.311244913Z" level=error msg="encountered an error cleaning up failed sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.311346 containerd[1460]: time="2025-09-05T00:06:57.311303614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d99f4944-556sk,Uid:294d5db6-d083-4e8f-8a08-06a5c0441597,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.311599 kubelet[2496]: E0905 00:06:57.311558 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.311659 kubelet[2496]: E0905 00:06:57.311614 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55d99f4944-556sk" Sep 5 00:06:57.311659 kubelet[2496]: E0905 00:06:57.311636 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55d99f4944-556sk" Sep 5 00:06:57.311722 kubelet[2496]: E0905 00:06:57.311680 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55d99f4944-556sk_calico-system(294d5db6-d083-4e8f-8a08-06a5c0441597)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55d99f4944-556sk_calico-system(294d5db6-d083-4e8f-8a08-06a5c0441597)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55d99f4944-556sk" podUID="294d5db6-d083-4e8f-8a08-06a5c0441597" Sep 5 00:06:57.389498 kubelet[2496]: E0905 00:06:57.389424 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:57.391223 containerd[1460]: time="2025-09-05T00:06:57.391179108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-78zh5,Uid:81442ceb-28f7-40f4-af38-c4787936e8ed,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:57.395593 kubelet[2496]: E0905 00:06:57.395491 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:57.396233 containerd[1460]: time="2025-09-05T00:06:57.396180094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qqh7,Uid:39d8bbc4-53e3-418e-a624-98460b3ce6db,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:57.398793 containerd[1460]: time="2025-09-05T00:06:57.398719692Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-pn4h6,Uid:5849096c-759f-41e6-9d07-d3b4efa485b6,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:06:57.403407 containerd[1460]: time="2025-09-05T00:06:57.403374319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-9vf8n,Uid:870d18b6-80da-4049-85fc-c52675a83993,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:06:57.410982 containerd[1460]: time="2025-09-05T00:06:57.410944519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-66gx6,Uid:75204577-0370-4fa9-a0b9-0d372c897a36,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:57.426225 containerd[1460]: time="2025-09-05T00:06:57.426065222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc6b5cb6f-d46dk,Uid:f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:57.473113 containerd[1460]: time="2025-09-05T00:06:57.472940150Z" level=error msg="Failed to destroy network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.481054 containerd[1460]: time="2025-09-05T00:06:57.480987877Z" level=error msg="encountered an error cleaning up failed sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.481054 containerd[1460]: time="2025-09-05T00:06:57.481055233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-78zh5,Uid:81442ceb-28f7-40f4-af38-c4787936e8ed,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.481365 kubelet[2496]: E0905 00:06:57.481326 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.481435 kubelet[2496]: E0905 00:06:57.481391 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-78zh5" Sep 5 00:06:57.481435 kubelet[2496]: E0905 00:06:57.481417 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-78zh5" Sep 5 00:06:57.481507 kubelet[2496]: E0905 00:06:57.481462 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-78zh5_kube-system(81442ceb-28f7-40f4-af38-c4787936e8ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-78zh5_kube-system(81442ceb-28f7-40f4-af38-c4787936e8ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-78zh5" podUID="81442ceb-28f7-40f4-af38-c4787936e8ed" Sep 5 00:06:57.550157 containerd[1460]: time="2025-09-05T00:06:57.550095056Z" level=error msg="Failed to destroy network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.550822 containerd[1460]: time="2025-09-05T00:06:57.550655159Z" level=error msg="encountered an error cleaning up failed sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.550822 containerd[1460]: time="2025-09-05T00:06:57.550705072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc6b5cb6f-d46dk,Uid:f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.552814 kubelet[2496]: E0905 00:06:57.552764 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.552889 kubelet[2496]: E0905 00:06:57.552842 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" Sep 5 00:06:57.552889 kubelet[2496]: E0905 00:06:57.552870 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" Sep 5 00:06:57.552823 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df-shm.mount: Deactivated successfully. Sep 5 00:06:57.553233 kubelet[2496]: E0905 00:06:57.552923 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dc6b5cb6f-d46dk_calico-system(f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dc6b5cb6f-d46dk_calico-system(f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" podUID="f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e" Sep 5 00:06:57.564568 containerd[1460]: time="2025-09-05T00:06:57.564355685Z" level=error msg="Failed to destroy network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.567867 containerd[1460]: time="2025-09-05T00:06:57.564757960Z" level=error msg="Failed to destroy network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.568080 containerd[1460]: time="2025-09-05T00:06:57.567948299Z" level=error msg="Failed to destroy network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.568524 containerd[1460]: time="2025-09-05T00:06:57.568462474Z" level=error msg="encountered an error cleaning up failed sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.570324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90-shm.mount: Deactivated successfully. 
Sep 5 00:06:57.571099 containerd[1460]: time="2025-09-05T00:06:57.568678189Z" level=error msg="encountered an error cleaning up failed sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.571099 containerd[1460]: time="2025-09-05T00:06:57.570936097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-66gx6,Uid:75204577-0370-4fa9-a0b9-0d372c897a36,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.571099 containerd[1460]: time="2025-09-05T00:06:57.570873310Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qqh7,Uid:39d8bbc4-53e3-418e-a624-98460b3ce6db,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.571252 kubelet[2496]: E0905 00:06:57.571181 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.571252 kubelet[2496]: E0905 00:06:57.571219 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.571594 kubelet[2496]: E0905 00:06:57.571290 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-66gx6" Sep 5 00:06:57.571594 kubelet[2496]: E0905 00:06:57.571316 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-66gx6" Sep 5 00:06:57.571594 kubelet[2496]: E0905 00:06:57.571245 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6qqh7" Sep 5 00:06:57.571594 kubelet[2496]: E0905 00:06:57.571342 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-6qqh7" Sep 5 00:06:57.571704 kubelet[2496]: E0905 00:06:57.571357 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-66gx6_calico-system(75204577-0370-4fa9-a0b9-0d372c897a36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-66gx6_calico-system(75204577-0370-4fa9-a0b9-0d372c897a36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-66gx6" podUID="75204577-0370-4fa9-a0b9-0d372c897a36" Sep 5 00:06:57.571704 kubelet[2496]: E0905 00:06:57.571388 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-6qqh7_kube-system(39d8bbc4-53e3-418e-a624-98460b3ce6db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-6qqh7_kube-system(39d8bbc4-53e3-418e-a624-98460b3ce6db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6qqh7" podUID="39d8bbc4-53e3-418e-a624-98460b3ce6db" Sep 5 00:06:57.573057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48-shm.mount: Deactivated successfully. 
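The runs of \\\" in the pod_workers entries above are nested string escapes, not corruption: the CNI error is quoted into a gRPC status ("rpc error: code = Unknown desc = ..."), that status is quoted into the CreatePodSandboxError string, and the structured logger quotes the whole field once more, each layer doubling the backslashes. A sketch of the effect:

```go
package main

import "fmt"

// Each wrapping layer re-quotes the previous message; two rounds of %q
// already produce the \\\" runs visible in the pod_workers entries.
func main() {
	msg := `plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`
	once := fmt.Sprintf("%q", msg)   // one layer:  \"calico\"
	twice := fmt.Sprintf("%q", once) // two layers: \\\"calico\\\"
	fmt.Println(once)
	fmt.Println(twice)
}
```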
Sep 5 00:06:57.574129 containerd[1460]: time="2025-09-05T00:06:57.574096410Z" level=error msg="Failed to destroy network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.575859 containerd[1460]: time="2025-09-05T00:06:57.574548018Z" level=error msg="encountered an error cleaning up failed sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.575859 containerd[1460]: time="2025-09-05T00:06:57.574595246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-9vf8n,Uid:870d18b6-80da-4049-85fc-c52675a83993,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.575959 kubelet[2496]: E0905 00:06:57.575691 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.575959 kubelet[2496]: E0905 00:06:57.575731 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" Sep 5 00:06:57.575959 kubelet[2496]: E0905 00:06:57.575747 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" Sep 5 00:06:57.576040 kubelet[2496]: E0905 00:06:57.575788 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f6b95c467-9vf8n_calico-apiserver(870d18b6-80da-4049-85fc-c52675a83993)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f6b95c467-9vf8n_calico-apiserver(870d18b6-80da-4049-85fc-c52675a83993)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" podUID="870d18b6-80da-4049-85fc-c52675a83993" Sep 5 00:06:57.576338 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28-shm.mount: Deactivated successfully. Sep 5 00:06:57.592071 containerd[1460]: time="2025-09-05T00:06:57.592017098Z" level=error msg="encountered an error cleaning up failed sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.592492 containerd[1460]: time="2025-09-05T00:06:57.592090666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-pn4h6,Uid:5849096c-759f-41e6-9d07-d3b4efa485b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.592570 kubelet[2496]: E0905 00:06:57.592297 2496 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.592570 kubelet[2496]: E0905 00:06:57.592349 2496 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" Sep 5 00:06:57.592570 kubelet[2496]: E0905 00:06:57.592370 2496 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" Sep 5 00:06:57.592676 kubelet[2496]: E0905 00:06:57.592411 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f6b95c467-pn4h6_calico-apiserver(5849096c-759f-41e6-9d07-d3b4efa485b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f6b95c467-pn4h6_calico-apiserver(5849096c-759f-41e6-9d07-d3b4efa485b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" 
podUID="5849096c-759f-41e6-9d07-d3b4efa485b6" Sep 5 00:06:57.666532 kubelet[2496]: I0905 00:06:57.666499 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:06:57.669044 kubelet[2496]: I0905 00:06:57.668764 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:06:57.670031 kubelet[2496]: I0905 00:06:57.669821 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:06:57.671703 kubelet[2496]: I0905 00:06:57.671684 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:06:57.672677 containerd[1460]: time="2025-09-05T00:06:57.672639383Z" level=info msg="StopPodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\"" Sep 5 00:06:57.675190 containerd[1460]: time="2025-09-05T00:06:57.675133795Z" level=info msg="StopPodSandbox for \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\"" Sep 5 00:06:57.675375 containerd[1460]: time="2025-09-05T00:06:57.675340835Z" level=info msg="StopPodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\"" Sep 5 00:06:57.675846 containerd[1460]: time="2025-09-05T00:06:57.675590443Z" level=info msg="StopPodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\"" Sep 5 00:06:57.680439 containerd[1460]: time="2025-09-05T00:06:57.680395983Z" level=info msg="Ensure that sandbox 1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd in task-service has been cleanup successfully" Sep 5 00:06:57.680683 containerd[1460]: time="2025-09-05T00:06:57.680402165Z" level=info msg="Ensure that sandbox dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c in task-service has been cleanup successfully" Sep 5 00:06:57.682098 containerd[1460]: time="2025-09-05T00:06:57.682072721Z" level=info msg="Ensure that sandbox da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28 in task-service has been cleanup successfully" Sep 5 00:06:57.682338 containerd[1460]: time="2025-09-05T00:06:57.682313272Z" level=info msg="Ensure that sandbox c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48 in task-service has been cleanup successfully" Sep 5 00:06:57.685018 kubelet[2496]: I0905 00:06:57.684467 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Sep 5 00:06:57.686071 containerd[1460]: time="2025-09-05T00:06:57.686022244Z" level=info msg="StopPodSandbox for \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\"" Sep 5 00:06:57.686249 containerd[1460]: time="2025-09-05T00:06:57.686221929Z" level=info msg="Ensure that sandbox 32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90 in task-service has been cleanup successfully" Sep 5 00:06:57.688946 kubelet[2496]: I0905 00:06:57.688913 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:06:57.690200 containerd[1460]: time="2025-09-05T00:06:57.689879645Z" level=info msg="StopPodSandbox for 
\"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\"" Sep 5 00:06:57.690431 containerd[1460]: time="2025-09-05T00:06:57.690376938Z" level=info msg="Ensure that sandbox f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457 in task-service has been cleanup successfully" Sep 5 00:06:57.705852 containerd[1460]: time="2025-09-05T00:06:57.705575508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:06:57.710419 kubelet[2496]: I0905 00:06:57.710381 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:06:57.717261 containerd[1460]: time="2025-09-05T00:06:57.717212120Z" level=info msg="StopPodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\"" Sep 5 00:06:57.719946 containerd[1460]: time="2025-09-05T00:06:57.719911658Z" level=info msg="Ensure that sandbox bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df in task-service has been cleanup successfully" Sep 5 00:06:57.728820 containerd[1460]: time="2025-09-05T00:06:57.728765757Z" level=error msg="StopPodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" failed" error="failed to destroy network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.729247 kubelet[2496]: E0905 00:06:57.729014 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:06:57.729247 kubelet[2496]: E0905 00:06:57.729084 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48"} Sep 5 00:06:57.729247 kubelet[2496]: E0905 00:06:57.729164 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75204577-0370-4fa9-a0b9-0d372c897a36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.729247 kubelet[2496]: E0905 00:06:57.729210 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75204577-0370-4fa9-a0b9-0d372c897a36\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-66gx6" podUID="75204577-0370-4fa9-a0b9-0d372c897a36" Sep 5 00:06:57.729444 containerd[1460]: 
time="2025-09-05T00:06:57.729033320Z" level=error msg="StopPodSandbox for \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\" failed" error="failed to destroy network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.729823 kubelet[2496]: I0905 00:06:57.729810 2496 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:06:57.731232 kubelet[2496]: E0905 00:06:57.731045 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:06:57.731232 kubelet[2496]: E0905 00:06:57.731079 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c"} Sep 5 00:06:57.731232 kubelet[2496]: E0905 00:06:57.731103 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5849096c-759f-41e6-9d07-d3b4efa485b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.731232 kubelet[2496]: E0905 00:06:57.731123 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5849096c-759f-41e6-9d07-d3b4efa485b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" podUID="5849096c-759f-41e6-9d07-d3b4efa485b6" Sep 5 00:06:57.732074 containerd[1460]: time="2025-09-05T00:06:57.731967167Z" level=info msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" Sep 5 00:06:57.732690 containerd[1460]: time="2025-09-05T00:06:57.732436217Z" level=info msg="Ensure that sandbox b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472 in task-service has been cleanup successfully" Sep 5 00:06:57.746058 containerd[1460]: time="2025-09-05T00:06:57.746011198Z" level=error msg="StopPodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" failed" error="failed to destroy network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.746886 
kubelet[2496]: E0905 00:06:57.746608 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:06:57.746886 kubelet[2496]: E0905 00:06:57.746687 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28"} Sep 5 00:06:57.746886 kubelet[2496]: E0905 00:06:57.746723 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"870d18b6-80da-4049-85fc-c52675a83993\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.746886 kubelet[2496]: E0905 00:06:57.746766 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"870d18b6-80da-4049-85fc-c52675a83993\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" podUID="870d18b6-80da-4049-85fc-c52675a83993" Sep 5 00:06:57.762076 containerd[1460]: time="2025-09-05T00:06:57.762017354Z" level=error msg="StopPodSandbox for \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\" failed" error="failed to destroy network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.762990 containerd[1460]: time="2025-09-05T00:06:57.762956567Z" level=error msg="StopPodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" failed" error="failed to destroy network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.765781 kubelet[2496]: E0905 00:06:57.765729 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:06:57.765781 kubelet[2496]: E0905 00:06:57.765775 2496 kuberuntime_manager.go:1546] "Failed to stop 
sandbox" podSandboxID={"Type":"containerd","ID":"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd"} Sep 5 00:06:57.765909 kubelet[2496]: E0905 00:06:57.765806 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81442ceb-28f7-40f4-af38-c4787936e8ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.765909 kubelet[2496]: E0905 00:06:57.765855 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81442ceb-28f7-40f4-af38-c4787936e8ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-78zh5" podUID="81442ceb-28f7-40f4-af38-c4787936e8ed" Sep 5 00:06:57.765909 kubelet[2496]: E0905 00:06:57.765885 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Sep 5 00:06:57.765909 kubelet[2496]: E0905 00:06:57.765902 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90"} Sep 5 00:06:57.767267 kubelet[2496]: E0905 00:06:57.765918 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39d8bbc4-53e3-418e-a624-98460b3ce6db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.767267 kubelet[2496]: E0905 00:06:57.765938 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39d8bbc4-53e3-418e-a624-98460b3ce6db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-6qqh7" podUID="39d8bbc4-53e3-418e-a624-98460b3ce6db" Sep 5 00:06:57.772428 containerd[1460]: time="2025-09-05T00:06:57.772376970Z" level=error msg="StopPodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" failed" error="failed to destroy network for sandbox 
\"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.772586 containerd[1460]: time="2025-09-05T00:06:57.772563149Z" level=error msg="StopPodSandbox for \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" failed" error="failed to destroy network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.772882 kubelet[2496]: E0905 00:06:57.772843 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:06:57.773017 kubelet[2496]: E0905 00:06:57.772991 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df"} Sep 5 00:06:57.773099 kubelet[2496]: E0905 00:06:57.773068 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.773162 kubelet[2496]: E0905 00:06:57.773096 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" podUID="f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e" Sep 5 00:06:57.773162 kubelet[2496]: E0905 00:06:57.772847 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:06:57.773162 kubelet[2496]: E0905 00:06:57.773149 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457"} Sep 5 00:06:57.773252 kubelet[2496]: E0905 00:06:57.773168 2496 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb024f3a-da3b-4d8c-8844-6229ed3989be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.773252 kubelet[2496]: E0905 00:06:57.773209 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb024f3a-da3b-4d8c-8844-6229ed3989be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-9mdwq" podUID="cb024f3a-da3b-4d8c-8844-6229ed3989be" Sep 5 00:06:57.781342 containerd[1460]: time="2025-09-05T00:06:57.781288016Z" level=error msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" failed" error="failed to destroy network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:57.781557 kubelet[2496]: E0905 00:06:57.781507 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:06:57.781635 kubelet[2496]: E0905 00:06:57.781560 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472"} Sep 5 00:06:57.781635 kubelet[2496]: E0905 00:06:57.781590 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"294d5db6-d083-4e8f-8a08-06a5c0441597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:57.781635 kubelet[2496]: E0905 00:06:57.781612 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"294d5db6-d083-4e8f-8a08-06a5c0441597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55d99f4944-556sk" podUID="294d5db6-d083-4e8f-8a08-06a5c0441597" 
Sep 5 00:06:58.498955 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c-shm.mount: Deactivated successfully. Sep 5 00:07:00.082784 systemd[1]: Started sshd@8-10.0.0.21:22-10.0.0.1:49286.service - OpenSSH per-connection server daemon (10.0.0.1:49286). Sep 5 00:07:00.121350 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 49286 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:00.122945 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:00.127257 systemd-logind[1438]: New session 9 of user core. Sep 5 00:07:00.138984 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 5 00:07:00.232292 kubelet[2496]: I0905 00:07:00.232244 2496 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:07:00.232671 kubelet[2496]: E0905 00:07:00.232623 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:00.260682 sshd[3771]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:00.265282 systemd[1]: sshd@8-10.0.0.21:22-10.0.0.1:49286.service: Deactivated successfully. Sep 5 00:07:00.265641 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:07:00.269890 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:07:00.273639 systemd-logind[1438]: Removed session 9. Sep 5 00:07:00.735713 kubelet[2496]: E0905 00:07:00.735655 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:05.280313 systemd[1]: Started sshd@9-10.0.0.21:22-10.0.0.1:49298.service - OpenSSH per-connection server daemon (10.0.0.1:49298). Sep 5 00:07:05.333000 sshd[3792]: Accepted publickey for core from 10.0.0.1 port 49298 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:05.334580 sshd[3792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:05.341656 systemd-logind[1438]: New session 10 of user core. Sep 5 00:07:05.347999 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 00:07:05.607053 sshd[3792]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:05.617194 systemd[1]: sshd@9-10.0.0.21:22-10.0.0.1:49298.service: Deactivated successfully. Sep 5 00:07:05.619236 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:07:05.621289 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Sep 5 00:07:05.632291 systemd[1]: Started sshd@10-10.0.0.21:22-10.0.0.1:49302.service - OpenSSH per-connection server daemon (10.0.0.1:49302). Sep 5 00:07:05.632916 systemd-logind[1438]: Removed session 10. Sep 5 00:07:05.660770 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 49302 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:05.662474 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:05.667174 systemd-logind[1438]: New session 11 of user core. Sep 5 00:07:05.674287 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 00:07:05.915787 sshd[3809]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:05.925300 systemd[1]: sshd@10-10.0.0.21:22-10.0.0.1:49302.service: Deactivated successfully. 
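The recurring "Nameserver limits exceeded" events (dns.go:153 above) are kubelet capping a pod's resolv.conf at three nameservers, the classic glibc resolver limit; the host list evidently carried a fourth entry, so only 1.1.1.1, 1.0.0.1, and 8.8.8.8 were applied. A simplified sketch of that trimming rule, under the assumption that the standard /etc/resolv.conf format is the input; kubelet's real logic lives in the dns.go referenced by the log.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the resolver honors at most three entries

func applyNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var ns []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			ns = append(ns, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		return nil, err
	}
	if len(ns) > maxNameservers {
		// Matches the wording of the event in the log above.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(ns[:maxNameservers], " "))
		ns = ns[:maxNameservers]
	}
	return ns, nil
}

func main() {
	ns, err := applyNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("applied:", strings.Join(ns, " "))
}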
Sep 5 00:07:05.927558 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 00:07:05.929175 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Sep 5 00:07:05.935573 systemd[1]: Started sshd@11-10.0.0.21:22-10.0.0.1:49314.service - OpenSSH per-connection server daemon (10.0.0.1:49314). Sep 5 00:07:05.937002 systemd-logind[1438]: Removed session 11. Sep 5 00:07:05.970140 sshd[3821]: Accepted publickey for core from 10.0.0.1 port 49314 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:05.972621 sshd[3821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:05.977110 systemd-logind[1438]: New session 12 of user core. Sep 5 00:07:05.982954 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 5 00:07:06.222027 sshd[3821]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:06.226008 systemd[1]: sshd@11-10.0.0.21:22-10.0.0.1:49314.service: Deactivated successfully. Sep 5 00:07:06.229952 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 00:07:06.230680 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. Sep 5 00:07:06.231691 systemd-logind[1438]: Removed session 12. Sep 5 00:07:06.355667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1170443411.mount: Deactivated successfully. Sep 5 00:07:07.787915 containerd[1460]: time="2025-09-05T00:07:07.787816590Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:07.790855 containerd[1460]: time="2025-09-05T00:07:07.790790780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:07:07.794607 containerd[1460]: time="2025-09-05T00:07:07.794571793Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:07.798218 containerd[1460]: time="2025-09-05T00:07:07.798186805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:07.798764 containerd[1460]: time="2025-09-05T00:07:07.798721319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.093095918s" Sep 5 00:07:07.798764 containerd[1460]: time="2025-09-05T00:07:07.798754791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:07:07.808683 containerd[1460]: time="2025-09-05T00:07:07.808649084Z" level=info msg="CreateContainer within sandbox \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:07:07.868316 containerd[1460]: time="2025-09-05T00:07:07.868267033Z" level=info msg="CreateContainer within sandbox \"754ce91e15f7691620e01ee8ec83c9314f4b7984046562704c94a837cc27e4be\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"cca2c8d304e3e6698868f115cbb396f74434ef987b0824c5f591f794565b0ca7\"" Sep 5 00:07:07.868952 containerd[1460]: time="2025-09-05T00:07:07.868916151Z" level=info msg="StartContainer for \"cca2c8d304e3e6698868f115cbb396f74434ef987b0824c5f591f794565b0ca7\"" Sep 5 00:07:07.927158 systemd[1]: Started cri-containerd-cca2c8d304e3e6698868f115cbb396f74434ef987b0824c5f591f794565b0ca7.scope - libcontainer container cca2c8d304e3e6698868f115cbb396f74434ef987b0824c5f591f794565b0ca7. Sep 5 00:07:08.260283 containerd[1460]: time="2025-09-05T00:07:08.260223082Z" level=info msg="StartContainer for \"cca2c8d304e3e6698868f115cbb396f74434ef987b0824c5f591f794565b0ca7\" returns successfully" Sep 5 00:07:08.273049 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:07:08.273674 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 00:07:08.578708 containerd[1460]: time="2025-09-05T00:07:08.578570089Z" level=info msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" Sep 5 00:07:08.579337 containerd[1460]: time="2025-09-05T00:07:08.579286704Z" level=info msg="StopPodSandbox for \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\"" Sep 5 00:07:08.614623 containerd[1460]: time="2025-09-05T00:07:08.614556260Z" level=error msg="StopPodSandbox for \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\" failed" error="failed to destroy network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:07:08.614746 containerd[1460]: time="2025-09-05T00:07:08.614573042Z" level=error msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" failed" error="failed to destroy network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:07:08.614915 kubelet[2496]: E0905 00:07:08.614852 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:08.615289 kubelet[2496]: E0905 00:07:08.614940 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472"} Sep 5 00:07:08.615289 kubelet[2496]: E0905 00:07:08.614852 2496 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:07:08.615289 kubelet[2496]: E0905 
00:07:08.615010 2496 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c"} Sep 5 00:07:08.615289 kubelet[2496]: E0905 00:07:08.615039 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5849096c-759f-41e6-9d07-d3b4efa485b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:07:08.615289 kubelet[2496]: E0905 00:07:08.614986 2496 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"294d5db6-d083-4e8f-8a08-06a5c0441597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:07:08.615471 kubelet[2496]: E0905 00:07:08.615071 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5849096c-759f-41e6-9d07-d3b4efa485b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" podUID="5849096c-759f-41e6-9d07-d3b4efa485b6" Sep 5 00:07:08.615471 kubelet[2496]: E0905 00:07:08.615104 2496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"294d5db6-d083-4e8f-8a08-06a5c0441597\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55d99f4944-556sk" podUID="294d5db6-d083-4e8f-8a08-06a5c0441597" Sep 5 00:07:08.763030 containerd[1460]: time="2025-09-05T00:07:08.762919133Z" level=info msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" Sep 5 00:07:08.808180 kubelet[2496]: I0905 00:07:08.808112 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v7t2b" podStartSLOduration=2.788667701 podStartE2EDuration="30.808089844s" podCreationTimestamp="2025-09-05 00:06:38 +0000 UTC" firstStartedPulling="2025-09-05 00:06:39.780002576 +0000 UTC m=+25.303500150" lastFinishedPulling="2025-09-05 00:07:07.799424718 +0000 UTC m=+53.322922293" observedRunningTime="2025-09-05 00:07:08.807702588 +0000 UTC m=+54.331200172" watchObservedRunningTime="2025-09-05 00:07:08.808089844 +0000 UTC m=+54.331587418" Sep 5 00:07:08.911711 systemd[1]: run-containerd-runc-k8s.io-cca2c8d304e3e6698868f115cbb396f74434ef987b0824c5f591f794565b0ca7-runc.7h2mzv.mount: Deactivated 
successfully. Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.849 [INFO][3937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.850 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" iface="eth0" netns="/var/run/netns/cni-c5cd12ac-1c01-6a41-69cc-c9681428ed53" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.851 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" iface="eth0" netns="/var/run/netns/cni-c5cd12ac-1c01-6a41-69cc-c9681428ed53" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.852 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" iface="eth0" netns="/var/run/netns/cni-c5cd12ac-1c01-6a41-69cc-c9681428ed53" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.852 [INFO][3937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.852 [INFO][3937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.962 [INFO][3947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.963 [INFO][3947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.963 [INFO][3947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.970 [WARNING][3947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.970 [INFO][3947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.972 [INFO][3947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:08.978669 containerd[1460]: 2025-09-05 00:07:08.975 [INFO][3937] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:08.980025 containerd[1460]: time="2025-09-05T00:07:08.979990925Z" level=info msg="TearDown network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" successfully" Sep 5 00:07:08.980079 containerd[1460]: time="2025-09-05T00:07:08.980024619Z" level=info msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" returns successfully" Sep 5 00:07:08.982715 systemd[1]: run-netns-cni\x2dc5cd12ac\x2d1c01\x2d6a41\x2d69cc\x2dc9681428ed53.mount: Deactivated successfully. Sep 5 00:07:09.036947 kubelet[2496]: I0905 00:07:09.036895 2496 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-ca-bundle\") pod \"294d5db6-d083-4e8f-8a08-06a5c0441597\" (UID: \"294d5db6-d083-4e8f-8a08-06a5c0441597\") " Sep 5 00:07:09.036947 kubelet[2496]: I0905 00:07:09.036952 2496 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtjmq\" (UniqueName: \"kubernetes.io/projected/294d5db6-d083-4e8f-8a08-06a5c0441597-kube-api-access-gtjmq\") pod \"294d5db6-d083-4e8f-8a08-06a5c0441597\" (UID: \"294d5db6-d083-4e8f-8a08-06a5c0441597\") " Sep 5 00:07:09.037152 kubelet[2496]: I0905 00:07:09.036987 2496 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-backend-key-pair\") pod \"294d5db6-d083-4e8f-8a08-06a5c0441597\" (UID: \"294d5db6-d083-4e8f-8a08-06a5c0441597\") " Sep 5 00:07:09.037480 kubelet[2496]: I0905 00:07:09.037427 2496 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "294d5db6-d083-4e8f-8a08-06a5c0441597" (UID: "294d5db6-d083-4e8f-8a08-06a5c0441597"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 00:07:09.040643 kubelet[2496]: I0905 00:07:09.040617 2496 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "294d5db6-d083-4e8f-8a08-06a5c0441597" (UID: "294d5db6-d083-4e8f-8a08-06a5c0441597"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 00:07:09.040984 kubelet[2496]: I0905 00:07:09.040950 2496 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/294d5db6-d083-4e8f-8a08-06a5c0441597-kube-api-access-gtjmq" (OuterVolumeSpecName: "kube-api-access-gtjmq") pod "294d5db6-d083-4e8f-8a08-06a5c0441597" (UID: "294d5db6-d083-4e8f-8a08-06a5c0441597"). InnerVolumeSpecName "kube-api-access-gtjmq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 00:07:09.042543 systemd[1]: var-lib-kubelet-pods-294d5db6\x2dd083\x2d4e8f\x2d8a08\x2d06a5c0441597-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgtjmq.mount: Deactivated successfully. Sep 5 00:07:09.042684 systemd[1]: var-lib-kubelet-pods-294d5db6\x2dd083\x2d4e8f\x2d8a08\x2d06a5c0441597-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 5 00:07:09.137603 kubelet[2496]: I0905 00:07:09.137566 2496 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:09.137603 kubelet[2496]: I0905 00:07:09.137591 2496 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/294d5db6-d083-4e8f-8a08-06a5c0441597-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:09.137603 kubelet[2496]: I0905 00:07:09.137600 2496 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gtjmq\" (UniqueName: \"kubernetes.io/projected/294d5db6-d083-4e8f-8a08-06a5c0441597-kube-api-access-gtjmq\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:09.578445 containerd[1460]: time="2025-09-05T00:07:09.578386425Z" level=info msg="StopPodSandbox for \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\"" Sep 5 00:07:09.578445 containerd[1460]: time="2025-09-05T00:07:09.578422913Z" level=info msg="StopPodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\"" Sep 5 00:07:09.578626 containerd[1460]: time="2025-09-05T00:07:09.578385663Z" level=info msg="StopPodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\"" Sep 5 00:07:09.580231 containerd[1460]: time="2025-09-05T00:07:09.579657329Z" level=info msg="StopPodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\"" Sep 5 00:07:09.785961 systemd[1]: Removed slice kubepods-besteffort-pod294d5db6_d083_4e8f_8a08_06a5c0441597.slice - libcontainer container kubepods-besteffort-pod294d5db6_d083_4e8f_8a08_06a5c0441597.slice. Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.669 [INFO][4037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.669 [INFO][4037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" iface="eth0" netns="/var/run/netns/cni-5ec1ffc4-957f-7466-6e86-7cd62d0e06de" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.669 [INFO][4037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" iface="eth0" netns="/var/run/netns/cni-5ec1ffc4-957f-7466-6e86-7cd62d0e06de" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.669 [INFO][4037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" iface="eth0" netns="/var/run/netns/cni-5ec1ffc4-957f-7466-6e86-7cd62d0e06de" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.670 [INFO][4037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.670 [INFO][4037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.761 [INFO][4065] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.762 [INFO][4065] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.762 [INFO][4065] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.775 [WARNING][4065] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.775 [INFO][4065] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.790 [INFO][4065] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:09.809178 containerd[1460]: 2025-09-05 00:07:09.798 [INFO][4037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:09.813689 containerd[1460]: time="2025-09-05T00:07:09.813639290Z" level=info msg="TearDown network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" successfully" Sep 5 00:07:09.813689 containerd[1460]: time="2025-09-05T00:07:09.813688232Z" level=info msg="StopPodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" returns successfully" Sep 5 00:07:09.820042 containerd[1460]: time="2025-09-05T00:07:09.817582047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-66gx6,Uid:75204577-0370-4fa9-a0b9-0d372c897a36,Namespace:calico-system,Attempt:1,}" Sep 5 00:07:09.819754 systemd[1]: run-netns-cni\x2d5ec1ffc4\x2d957f\x2d7466\x2d6e86\x2d7cd62d0e06de.mount: Deactivated successfully. Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.703 [INFO][4035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.704 [INFO][4035] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" iface="eth0" netns="/var/run/netns/cni-5ad23bb6-f717-6b00-5d3a-b942c14c350b" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.706 [INFO][4035] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" iface="eth0" netns="/var/run/netns/cni-5ad23bb6-f717-6b00-5d3a-b942c14c350b" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.708 [INFO][4035] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" iface="eth0" netns="/var/run/netns/cni-5ad23bb6-f717-6b00-5d3a-b942c14c350b" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.708 [INFO][4035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.708 [INFO][4035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.769 [INFO][4078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.770 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.791 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.825 [WARNING][4078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.825 [INFO][4078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.835 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:09.879965 containerd[1460]: 2025-09-05 00:07:09.857 [INFO][4035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:09.888908 containerd[1460]: time="2025-09-05T00:07:09.883430877Z" level=info msg="TearDown network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" successfully" Sep 5 00:07:09.888908 containerd[1460]: time="2025-09-05T00:07:09.883489496Z" level=info msg="StopPodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" returns successfully" Sep 5 00:07:09.884093 systemd[1]: run-netns-cni\x2d5ad23bb6\x2df717\x2d6b00\x2d5d3a\x2db942c14c350b.mount: Deactivated successfully. 
Sep 5 00:07:09.894248 containerd[1460]: time="2025-09-05T00:07:09.893363360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc6b5cb6f-d46dk,Uid:f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e,Namespace:calico-system,Attempt:1,}" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.692 [INFO][4036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.693 [INFO][4036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" iface="eth0" netns="/var/run/netns/cni-a487af94-1ad6-2f48-569c-f2b40f4d5cc3" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.694 [INFO][4036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" iface="eth0" netns="/var/run/netns/cni-a487af94-1ad6-2f48-569c-f2b40f4d5cc3" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.694 [INFO][4036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" iface="eth0" netns="/var/run/netns/cni-a487af94-1ad6-2f48-569c-f2b40f4d5cc3" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.694 [INFO][4036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.694 [INFO][4036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.769 [INFO][4071] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.770 [INFO][4071] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.835 [INFO][4071] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.852 [WARNING][4071] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.852 [INFO][4071] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.857 [INFO][4071] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:09.914440 containerd[1460]: 2025-09-05 00:07:09.881 [INFO][4036] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:09.914440 containerd[1460]: time="2025-09-05T00:07:09.914092206Z" level=info msg="TearDown network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" successfully" Sep 5 00:07:09.914440 containerd[1460]: time="2025-09-05T00:07:09.914148392Z" level=info msg="StopPodSandbox for \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" returns successfully" Sep 5 00:07:09.912416 systemd[1]: run-netns-cni\x2da487af94\x2d1ad6\x2d2f48\x2d569c\x2df2b40f4d5cc3.mount: Deactivated successfully. Sep 5 00:07:09.922617 containerd[1460]: time="2025-09-05T00:07:09.922178956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9mdwq,Uid:cb024f3a-da3b-4d8c-8844-6229ed3989be,Namespace:calico-system,Attempt:1,}" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.775 [INFO][4046] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.775 [INFO][4046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" iface="eth0" netns="/var/run/netns/cni-9e42ccf6-0fc4-17e9-8ff1-b7bd04e98a8f" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.776 [INFO][4046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" iface="eth0" netns="/var/run/netns/cni-9e42ccf6-0fc4-17e9-8ff1-b7bd04e98a8f" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.776 [INFO][4046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" iface="eth0" netns="/var/run/netns/cni-9e42ccf6-0fc4-17e9-8ff1-b7bd04e98a8f" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.776 [INFO][4046] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.776 [INFO][4046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.899 [INFO][4091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.899 [INFO][4091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.899 [INFO][4091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.986 [WARNING][4091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.986 [INFO][4091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:09.994 [INFO][4091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:10.017965 containerd[1460]: 2025-09-05 00:07:10.007 [INFO][4046] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:10.025068 containerd[1460]: time="2025-09-05T00:07:10.019970628Z" level=info msg="TearDown network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" successfully" Sep 5 00:07:10.025068 containerd[1460]: time="2025-09-05T00:07:10.020018217Z" level=info msg="StopPodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" returns successfully" Sep 5 00:07:10.025068 containerd[1460]: time="2025-09-05T00:07:10.020707661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-9vf8n,Uid:870d18b6-80da-4049-85fc-c52675a83993,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:07:10.591810 containerd[1460]: time="2025-09-05T00:07:10.590628207Z" level=info msg="StopPodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\"" Sep 5 00:07:10.596672 kubelet[2496]: I0905 00:07:10.596372 2496 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="294d5db6-d083-4e8f-8a08-06a5c0441597" path="/var/lib/kubelet/pods/294d5db6-d083-4e8f-8a08-06a5c0441597/volumes" Sep 5 00:07:10.810388 systemd[1]: run-netns-cni\x2d9e42ccf6\x2d0fc4\x2d17e9\x2d8ff1\x2db7bd04e98a8f.mount: Deactivated successfully. Sep 5 00:07:11.273907 systemd[1]: Started sshd@12-10.0.0.21:22-10.0.0.1:35338.service - OpenSSH per-connection server daemon (10.0.0.1:35338). Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.220 [INFO][4234] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.228 [INFO][4234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" iface="eth0" netns="/var/run/netns/cni-c10970e6-e500-e08b-a7cd-e1d053b33a38" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.235 [INFO][4234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" iface="eth0" netns="/var/run/netns/cni-c10970e6-e500-e08b-a7cd-e1d053b33a38" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.239 [INFO][4234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" iface="eth0" netns="/var/run/netns/cni-c10970e6-e500-e08b-a7cd-e1d053b33a38" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.239 [INFO][4234] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.239 [INFO][4234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.450 [INFO][4266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.450 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.450 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.488 [WARNING][4266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.488 [INFO][4266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.513 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:11.536894 containerd[1460]: 2025-09-05 00:07:11.525 [INFO][4234] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:11.536894 containerd[1460]: time="2025-09-05T00:07:11.533928878Z" level=info msg="TearDown network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" successfully" Sep 5 00:07:11.536894 containerd[1460]: time="2025-09-05T00:07:11.533964365Z" level=info msg="StopPodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" returns successfully" Sep 5 00:07:11.536894 containerd[1460]: time="2025-09-05T00:07:11.535197708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-78zh5,Uid:81442ceb-28f7-40f4-af38-c4787936e8ed,Namespace:kube-system,Attempt:1,}" Sep 5 00:07:11.537864 kubelet[2496]: E0905 00:07:11.534337 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:11.540321 systemd[1]: run-netns-cni\x2dc10970e6\x2de500\x2de08b\x2da7cd\x2de1d053b33a38.mount: Deactivated successfully. 
Sep 5 00:07:11.604716 kernel: bpftool[4284]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:07:11.612906 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 35338 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:11.616757 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:11.644005 systemd-logind[1438]: New session 13 of user core. Sep 5 00:07:11.660187 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:07:11.914390 sshd[4267]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:11.920083 systemd[1]: sshd@12-10.0.0.21:22-10.0.0.1:35338.service: Deactivated successfully. Sep 5 00:07:11.923328 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:07:11.926269 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Sep 5 00:07:11.927715 systemd-logind[1438]: Removed session 13. Sep 5 00:07:11.980561 systemd-networkd[1373]: vxlan.calico: Link UP Sep 5 00:07:11.980573 systemd-networkd[1373]: vxlan.calico: Gained carrier Sep 5 00:07:12.104379 systemd-networkd[1373]: calib253a761550: Link UP Sep 5 00:07:12.105647 systemd-networkd[1373]: calib253a761550: Gained carrier Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.001 [INFO][4314] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--66gx6-eth0 goldmane-54d579b49d- calico-system 75204577-0370-4fa9-a0b9-0d372c897a36 1060 0 2025-09-05 00:06:37 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-66gx6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib253a761550 [] [] }} ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.002 [INFO][4314] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.043 [INFO][4345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" HandleID="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.043 [INFO][4345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" HandleID="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e6d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-66gx6", "timestamp":"2025-09-05 00:07:12.043261415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.043 [INFO][4345] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.043 [INFO][4345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.044 [INFO][4345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.051 [INFO][4345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.060 [INFO][4345] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.065 [INFO][4345] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.067 [INFO][4345] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.069 [INFO][4345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.069 [INFO][4345] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.070 [INFO][4345] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.080 [INFO][4345] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.093 [INFO][4345] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.093 [INFO][4345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" host="localhost" Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.093 [INFO][4345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:07:12.125995 containerd[1460]: 2025-09-05 00:07:12.093 [INFO][4345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" HandleID="k8s-pod-network.72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.126578 containerd[1460]: 2025-09-05 00:07:12.100 [INFO][4314] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--66gx6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"75204577-0370-4fa9-a0b9-0d372c897a36", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-66gx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib253a761550", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:12.126578 containerd[1460]: 2025-09-05 00:07:12.100 [INFO][4314] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.126578 containerd[1460]: 2025-09-05 00:07:12.100 [INFO][4314] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib253a761550 ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.126578 containerd[1460]: 2025-09-05 00:07:12.106 [INFO][4314] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.126578 containerd[1460]: 2025-09-05 00:07:12.106 [INFO][4314] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--66gx6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"75204577-0370-4fa9-a0b9-0d372c897a36", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a", Pod:"goldmane-54d579b49d-66gx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib253a761550", MAC:"4e:30:77:02:33:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:12.126578 containerd[1460]: 2025-09-05 00:07:12.115 [INFO][4314] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a" Namespace="calico-system" Pod="goldmane-54d579b49d-66gx6" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:12.226077 containerd[1460]: time="2025-09-05T00:07:12.224064033Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:12.226077 containerd[1460]: time="2025-09-05T00:07:12.224127562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:12.226077 containerd[1460]: time="2025-09-05T00:07:12.224139034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:12.226077 containerd[1460]: time="2025-09-05T00:07:12.224227460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:12.254013 systemd[1]: Started cri-containerd-72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a.scope - libcontainer container 72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a. 
Sep 5 00:07:12.271613 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:12.286492 systemd-networkd[1373]: cali813089a1212: Link UP Sep 5 00:07:12.288009 systemd-networkd[1373]: cali813089a1212: Gained carrier Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.182 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0 calico-kube-controllers-dc6b5cb6f- calico-system f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e 1062 0 2025-09-05 00:06:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dc6b5cb6f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-dc6b5cb6f-d46dk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali813089a1212 [] [] }} ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.182 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.229 [INFO][4434] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" HandleID="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.229 [INFO][4434] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" HandleID="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ac8f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-dc6b5cb6f-d46dk", "timestamp":"2025-09-05 00:07:12.229091844 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.229 [INFO][4434] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.229 [INFO][4434] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.229 [INFO][4434] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.238 [INFO][4434] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.251 [INFO][4434] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.258 [INFO][4434] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.260 [INFO][4434] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.263 [INFO][4434] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.263 [INFO][4434] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.264 [INFO][4434] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82 Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.268 [INFO][4434] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.277 [INFO][4434] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.277 [INFO][4434] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" host="localhost" Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.277 [INFO][4434] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:07:12.312063 containerd[1460]: 2025-09-05 00:07:12.277 [INFO][4434] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" HandleID="k8s-pod-network.86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.312760 containerd[1460]: 2025-09-05 00:07:12.283 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0", GenerateName:"calico-kube-controllers-dc6b5cb6f-", Namespace:"calico-system", SelfLink:"", UID:"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc6b5cb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-dc6b5cb6f-d46dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali813089a1212", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:12.312760 containerd[1460]: 2025-09-05 00:07:12.283 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.312760 containerd[1460]: 2025-09-05 00:07:12.283 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali813089a1212 ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.312760 containerd[1460]: 2025-09-05 00:07:12.287 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.312760 containerd[1460]: 2025-09-05 00:07:12.287 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0", GenerateName:"calico-kube-controllers-dc6b5cb6f-", Namespace:"calico-system", SelfLink:"", UID:"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc6b5cb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82", Pod:"calico-kube-controllers-dc6b5cb6f-d46dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali813089a1212", MAC:"76:62:8e:a4:f2:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:12.312760 containerd[1460]: 2025-09-05 00:07:12.303 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82" Namespace="calico-system" Pod="calico-kube-controllers-dc6b5cb6f-d46dk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:12.323615 containerd[1460]: time="2025-09-05T00:07:12.323555759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-66gx6,Uid:75204577-0370-4fa9-a0b9-0d372c897a36,Namespace:calico-system,Attempt:1,} returns sandbox id \"72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a\"" Sep 5 00:07:12.326589 containerd[1460]: time="2025-09-05T00:07:12.326562329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 00:07:12.341877 containerd[1460]: time="2025-09-05T00:07:12.341696267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:12.342572 containerd[1460]: time="2025-09-05T00:07:12.342464919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:12.342572 containerd[1460]: time="2025-09-05T00:07:12.342486529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:12.342762 containerd[1460]: time="2025-09-05T00:07:12.342722492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:12.369185 systemd[1]: Started cri-containerd-86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82.scope - libcontainer container 86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82. Sep 5 00:07:12.398158 systemd-networkd[1373]: cali261c1a57ce1: Link UP Sep 5 00:07:12.399456 systemd-networkd[1373]: cali261c1a57ce1: Gained carrier Sep 5 00:07:12.402577 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:12.435820 containerd[1460]: time="2025-09-05T00:07:12.435738130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc6b5cb6f-d46dk,Uid:f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e,Namespace:calico-system,Attempt:1,} returns sandbox id \"86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82\"" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.194 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--9mdwq-eth0 csi-node-driver- calico-system cb024f3a-da3b-4d8c-8844-6229ed3989be 1061 0 2025-09-05 00:06:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-9mdwq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali261c1a57ce1 [] [] }} ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.195 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.264 [INFO][4448] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" HandleID="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.266 [INFO][4448] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" HandleID="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-9mdwq", "timestamp":"2025-09-05 00:07:12.264052223 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.267 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.277 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.277 [INFO][4448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.339 [INFO][4448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.349 [INFO][4448] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.359 [INFO][4448] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.361 [INFO][4448] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.363 [INFO][4448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.363 [INFO][4448] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.365 [INFO][4448] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144 Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.368 [INFO][4448] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.379 [INFO][4448] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.379 [INFO][4448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" host="localhost" Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.380 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:07:12.558887 containerd[1460]: 2025-09-05 00:07:12.380 [INFO][4448] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" HandleID="k8s-pod-network.b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.559942 containerd[1460]: 2025-09-05 00:07:12.389 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9mdwq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb024f3a-da3b-4d8c-8844-6229ed3989be", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-9mdwq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali261c1a57ce1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:12.559942 containerd[1460]: 2025-09-05 00:07:12.389 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.559942 containerd[1460]: 2025-09-05 00:07:12.389 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali261c1a57ce1 ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.559942 containerd[1460]: 2025-09-05 00:07:12.401 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.559942 containerd[1460]: 2025-09-05 00:07:12.403 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9mdwq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb024f3a-da3b-4d8c-8844-6229ed3989be", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144", Pod:"csi-node-driver-9mdwq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali261c1a57ce1", MAC:"82:44:65:59:a6:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:12.559942 containerd[1460]: 2025-09-05 00:07:12.555 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144" Namespace="calico-system" Pod="csi-node-driver-9mdwq" WorkloadEndpoint="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:12.578336 containerd[1460]: time="2025-09-05T00:07:12.578285820Z" level=info msg="StopPodSandbox for \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\"" Sep 5 00:07:12.682419 containerd[1460]: time="2025-09-05T00:07:12.681974259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:12.682419 containerd[1460]: time="2025-09-05T00:07:12.682106998Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:12.682419 containerd[1460]: time="2025-09-05T00:07:12.682124491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:12.682419 containerd[1460]: time="2025-09-05T00:07:12.682247864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:12.739971 systemd[1]: Started cri-containerd-b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144.scope - libcontainer container b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144. 
Sep 5 00:07:12.751671 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:12.763911 containerd[1460]: time="2025-09-05T00:07:12.763869062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-9mdwq,Uid:cb024f3a-da3b-4d8c-8844-6229ed3989be,Namespace:calico-system,Attempt:1,} returns sandbox id \"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144\"" Sep 5 00:07:13.122362 systemd-networkd[1373]: calie708e281d7e: Link UP Sep 5 00:07:13.123254 systemd-networkd[1373]: calie708e281d7e: Gained carrier Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.218 [INFO][4397] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0 calico-apiserver-6f6b95c467- calico-apiserver 870d18b6-80da-4049-85fc-c52675a83993 1064 0 2025-09-05 00:06:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f6b95c467 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f6b95c467-9vf8n eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie708e281d7e [] [] }} ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.218 [INFO][4397] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.290 [INFO][4467] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" HandleID="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.291 [INFO][4467] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" HandleID="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000397070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f6b95c467-9vf8n", "timestamp":"2025-09-05 00:07:12.288926242 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.291 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.380 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.380 [INFO][4467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.531 [INFO][4467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.623 [INFO][4467] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.637 [INFO][4467] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:12.897 [INFO][4467] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.022 [INFO][4467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.022 [INFO][4467] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.068 [INFO][4467] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.085 [INFO][4467] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.114 [INFO][4467] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.114 [INFO][4467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" host="localhost" Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.114 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:07:13.251105 containerd[1460]: 2025-09-05 00:07:13.114 [INFO][4467] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" HandleID="k8s-pod-network.9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.252610 containerd[1460]: 2025-09-05 00:07:13.118 [INFO][4397] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0", GenerateName:"calico-apiserver-6f6b95c467-", Namespace:"calico-apiserver", SelfLink:"", UID:"870d18b6-80da-4049-85fc-c52675a83993", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6b95c467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f6b95c467-9vf8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie708e281d7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:13.252610 containerd[1460]: 2025-09-05 00:07:13.118 [INFO][4397] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.252610 containerd[1460]: 2025-09-05 00:07:13.118 [INFO][4397] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie708e281d7e ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.252610 containerd[1460]: 2025-09-05 00:07:13.122 [INFO][4397] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.252610 containerd[1460]: 2025-09-05 00:07:13.123 [INFO][4397] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0", GenerateName:"calico-apiserver-6f6b95c467-", Namespace:"calico-apiserver", SelfLink:"", UID:"870d18b6-80da-4049-85fc-c52675a83993", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6b95c467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d", Pod:"calico-apiserver-6f6b95c467-9vf8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie708e281d7e", MAC:"16:7c:59:87:b4:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:13.252610 containerd[1460]: 2025-09-05 00:07:13.247 [INFO][4397] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-9vf8n" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:13.294953 containerd[1460]: time="2025-09-05T00:07:13.294072555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:13.294953 containerd[1460]: time="2025-09-05T00:07:13.294886508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:13.294953 containerd[1460]: time="2025-09-05T00:07:13.294900896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:13.295277 containerd[1460]: time="2025-09-05T00:07:13.295042459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:13.324036 systemd[1]: Started cri-containerd-9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d.scope - libcontainer container 9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d. 
Sep 5 00:07:13.332944 systemd-networkd[1373]: calib253a761550: Gained IPv6LL Sep 5 00:07:13.337619 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:13.361655 containerd[1460]: time="2025-09-05T00:07:13.361611384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-9vf8n,Uid:870d18b6-80da-4049-85fc-c52675a83993,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d\"" Sep 5 00:07:13.486854 systemd-networkd[1373]: cali90df2fd4307: Link UP Sep 5 00:07:13.487680 systemd-networkd[1373]: cali90df2fd4307: Gained carrier Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:12.245 [INFO][4424] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--78zh5-eth0 coredns-668d6bf9bc- kube-system 81442ceb-28f7-40f4-af38-c4787936e8ed 1079 0 2025-09-05 00:06:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-78zh5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali90df2fd4307 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:12.245 [INFO][4424] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:12.327 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" HandleID="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:12.327 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" HandleID="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-78zh5", "timestamp":"2025-09-05 00:07:12.327542227 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:12.327 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.114 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.114 [INFO][4492] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.120 [INFO][4492] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.243 [INFO][4492] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.249 [INFO][4492] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.252 [INFO][4492] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.255 [INFO][4492] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.255 [INFO][4492] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.257 [INFO][4492] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989 Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.272 [INFO][4492] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.480 [INFO][4492] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.480 [INFO][4492] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" host="localhost" Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.480 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:07:13.685043 containerd[1460]: 2025-09-05 00:07:13.481 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" HandleID="k8s-pod-network.d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.686106 containerd[1460]: 2025-09-05 00:07:13.484 [INFO][4424] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--78zh5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"81442ceb-28f7-40f4-af38-c4787936e8ed", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-78zh5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90df2fd4307", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:13.686106 containerd[1460]: 2025-09-05 00:07:13.484 [INFO][4424] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.686106 containerd[1460]: 2025-09-05 00:07:13.484 [INFO][4424] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali90df2fd4307 ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.686106 containerd[1460]: 2025-09-05 00:07:13.487 [INFO][4424] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.686106 
containerd[1460]: 2025-09-05 00:07:13.488 [INFO][4424] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--78zh5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"81442ceb-28f7-40f4-af38-c4787936e8ed", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989", Pod:"coredns-668d6bf9bc-78zh5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90df2fd4307", MAC:"66:19:1e:51:1f:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:13.686106 containerd[1460]: 2025-09-05 00:07:13.667 [INFO][4424] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989" Namespace="kube-system" Pod="coredns-668d6bf9bc-78zh5" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.897 [INFO][4620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.898 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" iface="eth0" netns="/var/run/netns/cni-bde0e9a7-16d8-bf94-5291-b2a5346c4cea" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.898 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" iface="eth0" netns="/var/run/netns/cni-bde0e9a7-16d8-bf94-5291-b2a5346c4cea" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.898 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" iface="eth0" netns="/var/run/netns/cni-bde0e9a7-16d8-bf94-5291-b2a5346c4cea" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.898 [INFO][4620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.898 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.997 [INFO][4670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" HandleID="k8s-pod-network.32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Workload="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:12.997 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:13.481 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:13.666 [WARNING][4670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" HandleID="k8s-pod-network.32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Workload="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:13.678 [INFO][4670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" HandleID="k8s-pod-network.32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Workload="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:13.681 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:13.688740 containerd[1460]: 2025-09-05 00:07:13.685 [INFO][4620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90" Sep 5 00:07:13.689445 containerd[1460]: time="2025-09-05T00:07:13.688991647Z" level=info msg="TearDown network for sandbox \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\" successfully" Sep 5 00:07:13.689445 containerd[1460]: time="2025-09-05T00:07:13.689019261Z" level=info msg="StopPodSandbox for \"32d1dcbe5e0f0c38080004982d20b2642bc8251f049429ea4caae05c4b5dec90\" returns successfully" Sep 5 00:07:13.689512 kubelet[2496]: E0905 00:07:13.689414 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:13.690659 containerd[1460]: time="2025-09-05T00:07:13.690373387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qqh7,Uid:39d8bbc4-53e3-418e-a624-98460b3ce6db,Namespace:kube-system,Attempt:1,}" Sep 5 00:07:13.693494 systemd[1]: run-netns-cni\x2dbde0e9a7\x2d16d8\x2dbf94\x2d5291\x2db2a5346c4cea.mount: Deactivated successfully. Sep 5 00:07:13.797452 containerd[1460]: time="2025-09-05T00:07:13.797249028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:13.797452 containerd[1460]: time="2025-09-05T00:07:13.797305677Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:13.797452 containerd[1460]: time="2025-09-05T00:07:13.797318251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:13.797452 containerd[1460]: time="2025-09-05T00:07:13.797427232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:13.823017 systemd[1]: Started cri-containerd-d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989.scope - libcontainer container d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989. Sep 5 00:07:13.839611 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:13.866469 containerd[1460]: time="2025-09-05T00:07:13.866421254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-78zh5,Uid:81442ceb-28f7-40f4-af38-c4787936e8ed,Namespace:kube-system,Attempt:1,} returns sandbox id \"d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989\"" Sep 5 00:07:13.869473 kubelet[2496]: E0905 00:07:13.869149 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:13.873573 containerd[1460]: time="2025-09-05T00:07:13.873537288Z" level=info msg="CreateContainer within sandbox \"d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:07:13.970145 systemd-networkd[1373]: cali813089a1212: Gained IPv6LL Sep 5 00:07:13.970566 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Sep 5 00:07:14.302529 systemd-networkd[1373]: cali09328164e07: Link UP Sep 5 00:07:14.302804 systemd-networkd[1373]: cali09328164e07: Gained carrier Sep 5 00:07:14.418036 systemd-networkd[1373]: cali261c1a57ce1: Gained IPv6LL Sep 5 00:07:14.561858 containerd[1460]: time="2025-09-05T00:07:14.561714533Z" level=info msg="StopPodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\"" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.171 [INFO][4771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0 coredns-668d6bf9bc- kube-system 39d8bbc4-53e3-418e-a624-98460b3ce6db 1100 0 2025-09-05 00:06:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-6qqh7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09328164e07 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.171 [INFO][4771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.194 [INFO][4787] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" HandleID="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Workload="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.194 [INFO][4787] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" HandleID="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Workload="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001397a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-6qqh7", "timestamp":"2025-09-05 00:07:14.194404744 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.194 [INFO][4787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.194 [INFO][4787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.194 [INFO][4787] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.201 [INFO][4787] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.205 [INFO][4787] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.209 [INFO][4787] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.211 [INFO][4787] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.214 [INFO][4787] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.214 [INFO][4787] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.216 [INFO][4787] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896 Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.238 [INFO][4787] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.293 [INFO][4787] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.293 [INFO][4787] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" host="localhost" Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.293 [INFO][4787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:14.600162 containerd[1460]: 2025-09-05 00:07:14.293 [INFO][4787] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" HandleID="k8s-pod-network.1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Workload="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.600928 containerd[1460]: 2025-09-05 00:07:14.297 [INFO][4771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"39d8bbc4-53e3-418e-a624-98460b3ce6db", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-6qqh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09328164e07", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:14.600928 containerd[1460]: 2025-09-05 00:07:14.297 [INFO][4771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.600928 containerd[1460]: 2025-09-05 00:07:14.297 [INFO][4771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09328164e07 
ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.600928 containerd[1460]: 2025-09-05 00:07:14.302 [INFO][4771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.600928 containerd[1460]: 2025-09-05 00:07:14.305 [INFO][4771] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"39d8bbc4-53e3-418e-a624-98460b3ce6db", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896", Pod:"coredns-668d6bf9bc-6qqh7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09328164e07", MAC:"8a:b5:54:49:5a:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:14.600928 containerd[1460]: 2025-09-05 00:07:14.594 [INFO][4771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896" Namespace="kube-system" Pod="coredns-668d6bf9bc-6qqh7" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--6qqh7-eth0" Sep 5 00:07:14.652106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3385191652.mount: Deactivated successfully. Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.602 [WARNING][4809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--78zh5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"81442ceb-28f7-40f4-af38-c4787936e8ed", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989", Pod:"coredns-668d6bf9bc-78zh5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90df2fd4307", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.606 [INFO][4809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.606 [INFO][4809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" iface="eth0" netns="" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.607 [INFO][4809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.607 [INFO][4809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.653 [INFO][4827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.653 [INFO][4827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.654 [INFO][4827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.658 [WARNING][4827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.659 [INFO][4827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.660 [INFO][4827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:14.666147 containerd[1460]: 2025-09-05 00:07:14.663 [INFO][4809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.666573 containerd[1460]: time="2025-09-05T00:07:14.666242587Z" level=info msg="TearDown network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" successfully" Sep 5 00:07:14.666573 containerd[1460]: time="2025-09-05T00:07:14.666275761Z" level=info msg="StopPodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" returns successfully" Sep 5 00:07:14.667036 containerd[1460]: time="2025-09-05T00:07:14.667003687Z" level=info msg="RemovePodSandbox for \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\"" Sep 5 00:07:14.669214 containerd[1460]: time="2025-09-05T00:07:14.669186122Z" level=info msg="Forcibly stopping sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\"" Sep 5 00:07:14.772707 containerd[1460]: time="2025-09-05T00:07:14.772269246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:14.772707 containerd[1460]: time="2025-09-05T00:07:14.772349922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:14.772707 containerd[1460]: time="2025-09-05T00:07:14.772366494Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:14.772707 containerd[1460]: time="2025-09-05T00:07:14.772454394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:14.795991 systemd[1]: Started cri-containerd-1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896.scope - libcontainer container 1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896. Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.758 [WARNING][4845] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--78zh5-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"81442ceb-28f7-40f4-af38-c4787936e8ed", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989", Pod:"coredns-668d6bf9bc-78zh5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali90df2fd4307", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.759 [INFO][4845] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.759 [INFO][4845] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" iface="eth0" netns="" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.759 [INFO][4845] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.759 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.793 [INFO][4857] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.793 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.793 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.801 [WARNING][4857] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.801 [INFO][4857] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" HandleID="k8s-pod-network.1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Workload="localhost-k8s-coredns--668d6bf9bc--78zh5-eth0" Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.803 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:14.810399 containerd[1460]: 2025-09-05 00:07:14.807 [INFO][4845] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd" Sep 5 00:07:14.810399 containerd[1460]: time="2025-09-05T00:07:14.810347432Z" level=info msg="TearDown network for sandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" successfully" Sep 5 00:07:14.814356 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:14.839002 containerd[1460]: time="2025-09-05T00:07:14.838955540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-6qqh7,Uid:39d8bbc4-53e3-418e-a624-98460b3ce6db,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896\"" Sep 5 00:07:14.839864 kubelet[2496]: E0905 00:07:14.839703 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:14.842477 containerd[1460]: time="2025-09-05T00:07:14.842441663Z" level=info msg="CreateContainer within sandbox \"1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:07:14.930066 systemd-networkd[1373]: cali90df2fd4307: Gained IPv6LL Sep 5 00:07:14.994028 systemd-networkd[1373]: calie708e281d7e: Gained IPv6LL Sep 5 00:07:15.028156 containerd[1460]: time="2025-09-05T00:07:15.028097219Z" level=info msg="CreateContainer within sandbox \"d4436c33917ccc562b52ef27488a8a7c52cc32cb76966861a858b8f235fb2989\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0498d3d296fcaa91063074b305f37006412f4056821cecddc3d840ec09cfef62\"" Sep 5 00:07:15.028928 containerd[1460]: time="2025-09-05T00:07:15.028813671Z" level=info msg="StartContainer for \"0498d3d296fcaa91063074b305f37006412f4056821cecddc3d840ec09cfef62\"" Sep 5 00:07:15.058088 systemd[1]: Started cri-containerd-0498d3d296fcaa91063074b305f37006412f4056821cecddc3d840ec09cfef62.scope - libcontainer container 0498d3d296fcaa91063074b305f37006412f4056821cecddc3d840ec09cfef62. Sep 5 00:07:15.086603 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount32745127.mount: Deactivated successfully. 
Sep 5 00:07:15.320599 containerd[1460]: time="2025-09-05T00:07:15.320552574Z" level=info msg="StartContainer for \"0498d3d296fcaa91063074b305f37006412f4056821cecddc3d840ec09cfef62\" returns successfully" Sep 5 00:07:15.379061 systemd-networkd[1373]: cali09328164e07: Gained IPv6LL Sep 5 00:07:15.417569 containerd[1460]: time="2025-09-05T00:07:15.417487293Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:07:15.417667 containerd[1460]: time="2025-09-05T00:07:15.417589591Z" level=info msg="RemovePodSandbox \"1df3371be6b32be7f7beda7908129fc6f1a5895fca80124cfe1fe862627447dd\" returns successfully" Sep 5 00:07:15.418184 containerd[1460]: time="2025-09-05T00:07:15.418147848Z" level=info msg="StopPodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\"" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.451 [WARNING][4945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0", GenerateName:"calico-kube-controllers-dc6b5cb6f-", Namespace:"calico-system", SelfLink:"", UID:"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc6b5cb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82", Pod:"calico-kube-controllers-dc6b5cb6f-d46dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali813089a1212", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.451 [INFO][4945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.451 [INFO][4945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" iface="eth0" netns="" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.451 [INFO][4945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.451 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.474 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.474 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.474 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.489 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.489 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.491 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:15.496340 containerd[1460]: 2025-09-05 00:07:15.493 [INFO][4945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.496926 containerd[1460]: time="2025-09-05T00:07:15.496397352Z" level=info msg="TearDown network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" successfully" Sep 5 00:07:15.496926 containerd[1460]: time="2025-09-05T00:07:15.496426388Z" level=info msg="StopPodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" returns successfully" Sep 5 00:07:15.496999 containerd[1460]: time="2025-09-05T00:07:15.496932816Z" level=info msg="RemovePodSandbox for \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\"" Sep 5 00:07:15.496999 containerd[1460]: time="2025-09-05T00:07:15.496958595Z" level=info msg="Forcibly stopping sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\"" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.529 [WARNING][4971] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0", GenerateName:"calico-kube-controllers-dc6b5cb6f-", Namespace:"calico-system", SelfLink:"", UID:"f01083ce-5e4f-4b16-83b8-1cdba0c5ae5e", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc6b5cb6f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82", Pod:"calico-kube-controllers-dc6b5cb6f-d46dk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali813089a1212", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.530 [INFO][4971] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.530 [INFO][4971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" iface="eth0" netns="" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.530 [INFO][4971] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.530 [INFO][4971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.551 [INFO][4980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.551 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.552 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.558 [WARNING][4980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.558 [INFO][4980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" HandleID="k8s-pod-network.bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Workload="localhost-k8s-calico--kube--controllers--dc6b5cb6f--d46dk-eth0" Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.559 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:15.565295 containerd[1460]: 2025-09-05 00:07:15.562 [INFO][4971] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df" Sep 5 00:07:15.565694 containerd[1460]: time="2025-09-05T00:07:15.565353483Z" level=info msg="TearDown network for sandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" successfully" Sep 5 00:07:15.792344 containerd[1460]: time="2025-09-05T00:07:15.792270372Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:07:15.792980 containerd[1460]: time="2025-09-05T00:07:15.792361418Z" level=info msg="RemovePodSandbox \"bfa7d13e5f4fef6d3fbb58c6602424f6e9db34d9b463063195028a5c49d967df\" returns successfully" Sep 5 00:07:15.792980 containerd[1460]: time="2025-09-05T00:07:15.792903464Z" level=info msg="StopPodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\"" Sep 5 00:07:15.804358 kubelet[2496]: E0905 00:07:15.804300 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:15.884618 kubelet[2496]: I0905 00:07:15.884223 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-78zh5" podStartSLOduration=55.884199122 podStartE2EDuration="55.884199122s" podCreationTimestamp="2025-09-05 00:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:07:15.883334032 +0000 UTC m=+61.406831606" watchObservedRunningTime="2025-09-05 00:07:15.884199122 +0000 UTC m=+61.407696696" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.887 [WARNING][5001] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0", GenerateName:"calico-apiserver-6f6b95c467-", Namespace:"calico-apiserver", SelfLink:"", UID:"870d18b6-80da-4049-85fc-c52675a83993", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6b95c467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d", Pod:"calico-apiserver-6f6b95c467-9vf8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie708e281d7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.887 [INFO][5001] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.887 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" iface="eth0" netns="" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.887 [INFO][5001] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.887 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.905 [INFO][5010] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.906 [INFO][5010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:15.906 [INFO][5010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:16.156 [WARNING][5010] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:16.156 [INFO][5010] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:16.201 [INFO][5010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:16.208858 containerd[1460]: 2025-09-05 00:07:16.205 [INFO][5001] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:16.209349 containerd[1460]: time="2025-09-05T00:07:16.208932862Z" level=info msg="TearDown network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" successfully" Sep 5 00:07:16.209349 containerd[1460]: time="2025-09-05T00:07:16.208967389Z" level=info msg="StopPodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" returns successfully" Sep 5 00:07:16.209531 containerd[1460]: time="2025-09-05T00:07:16.209497531Z" level=info msg="RemovePodSandbox for \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\"" Sep 5 00:07:16.209574 containerd[1460]: time="2025-09-05T00:07:16.209530675Z" level=info msg="Forcibly stopping sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\"" Sep 5 00:07:16.809351 kubelet[2496]: E0905 00:07:16.809315 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:16.928656 systemd[1]: Started sshd@13-10.0.0.21:22-10.0.0.1:35354.service - OpenSSH per-connection server daemon (10.0.0.1:35354). Sep 5 00:07:17.018532 sshd[5040]: Accepted publickey for core from 10.0.0.1 port 35354 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:17.020351 sshd[5040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:17.025139 systemd-logind[1438]: New session 14 of user core. Sep 5 00:07:17.036987 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:07:17.433653 sshd[5040]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:17.437919 systemd[1]: sshd@13-10.0.0.21:22-10.0.0.1:35354.service: Deactivated successfully. Sep 5 00:07:17.439698 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:07:17.440299 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:07:17.441169 systemd-logind[1438]: Removed session 14. Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.036 [WARNING][5030] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0", GenerateName:"calico-apiserver-6f6b95c467-", Namespace:"calico-apiserver", SelfLink:"", UID:"870d18b6-80da-4049-85fc-c52675a83993", ResourceVersion:"1107", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6b95c467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d", Pod:"calico-apiserver-6f6b95c467-9vf8n", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie708e281d7e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.037 [INFO][5030] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.037 [INFO][5030] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" iface="eth0" netns="" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.037 [INFO][5030] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.037 [INFO][5030] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.064 [INFO][5043] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.065 [INFO][5043] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.065 [INFO][5043] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.564 [WARNING][5043] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.564 [INFO][5043] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" HandleID="k8s-pod-network.da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Workload="localhost-k8s-calico--apiserver--6f6b95c467--9vf8n-eth0" Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.565 [INFO][5043] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:17.570944 containerd[1460]: 2025-09-05 00:07:17.568 [INFO][5030] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28" Sep 5 00:07:17.571894 containerd[1460]: time="2025-09-05T00:07:17.570994755Z" level=info msg="TearDown network for sandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" successfully" Sep 5 00:07:17.812778 kubelet[2496]: E0905 00:07:17.812654 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:19.002808 containerd[1460]: time="2025-09-05T00:07:19.002625922Z" level=info msg="CreateContainer within sandbox \"1a23a5c005487c393900a50d88432e843f1821a33b814f59cb91459418fe5896\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a844dbc7cba97d25461f0dfc32e39882bbd739c4bb91e3db9f6161c48d35c0bb\"" Sep 5 00:07:19.004334 containerd[1460]: time="2025-09-05T00:07:19.003973575Z" level=info msg="StartContainer for \"a844dbc7cba97d25461f0dfc32e39882bbd739c4bb91e3db9f6161c48d35c0bb\"" Sep 5 00:07:19.053091 systemd[1]: Started cri-containerd-a844dbc7cba97d25461f0dfc32e39882bbd739c4bb91e3db9f6161c48d35c0bb.scope - libcontainer container a844dbc7cba97d25461f0dfc32e39882bbd739c4bb91e3db9f6161c48d35c0bb. Sep 5 00:07:19.538529 containerd[1460]: time="2025-09-05T00:07:19.538449892Z" level=info msg="StartContainer for \"a844dbc7cba97d25461f0dfc32e39882bbd739c4bb91e3db9f6161c48d35c0bb\" returns successfully" Sep 5 00:07:19.561278 containerd[1460]: time="2025-09-05T00:07:19.561056033Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:07:19.561278 containerd[1460]: time="2025-09-05T00:07:19.561132210Z" level=info msg="RemovePodSandbox \"da55d0eb5af25b59e71bcde6954035338764c87aba5cae9c603cf958dd551d28\" returns successfully" Sep 5 00:07:19.562400 containerd[1460]: time="2025-09-05T00:07:19.562376885Z" level=info msg="StopPodSandbox for \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\"" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.606 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9mdwq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb024f3a-da3b-4d8c-8844-6229ed3989be", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144", Pod:"csi-node-driver-9mdwq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali261c1a57ce1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.608 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.608 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" iface="eth0" netns="" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.608 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.608 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.631 [INFO][5136] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.632 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.632 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.640 [WARNING][5136] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.640 [INFO][5136] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.642 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:19.648634 containerd[1460]: 2025-09-05 00:07:19.645 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.649505 containerd[1460]: time="2025-09-05T00:07:19.648681696Z" level=info msg="TearDown network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" successfully" Sep 5 00:07:19.649505 containerd[1460]: time="2025-09-05T00:07:19.648713867Z" level=info msg="StopPodSandbox for \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" returns successfully" Sep 5 00:07:19.649505 containerd[1460]: time="2025-09-05T00:07:19.649295447Z" level=info msg="RemovePodSandbox for \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\"" Sep 5 00:07:19.649505 containerd[1460]: time="2025-09-05T00:07:19.649331025Z" level=info msg="Forcibly stopping sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\"" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.685 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--9mdwq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cb024f3a-da3b-4d8c-8844-6229ed3989be", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144", Pod:"csi-node-driver-9mdwq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali261c1a57ce1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.686 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.686 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" iface="eth0" netns="" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.686 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.686 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.714 [INFO][5164] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.714 [INFO][5164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.714 [INFO][5164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.721 [WARNING][5164] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.721 [INFO][5164] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" HandleID="k8s-pod-network.f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Workload="localhost-k8s-csi--node--driver--9mdwq-eth0" Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.722 [INFO][5164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:19.729783 containerd[1460]: 2025-09-05 00:07:19.726 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457" Sep 5 00:07:19.730254 containerd[1460]: time="2025-09-05T00:07:19.729874101Z" level=info msg="TearDown network for sandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" successfully" Sep 5 00:07:19.734888 containerd[1460]: time="2025-09-05T00:07:19.733857856Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:07:19.734888 containerd[1460]: time="2025-09-05T00:07:19.733907031Z" level=info msg="RemovePodSandbox \"f1d5770d40512f3e43b79ed8495660ab97507526b977ecfebb0d2da7b307b457\" returns successfully" Sep 5 00:07:19.735361 containerd[1460]: time="2025-09-05T00:07:19.735337403Z" level=info msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.769 [WARNING][5181] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" WorkloadEndpoint="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.769 [INFO][5181] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.769 [INFO][5181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" iface="eth0" netns="" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.769 [INFO][5181] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.769 [INFO][5181] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.792 [INFO][5190] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.792 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.792 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.799 [WARNING][5190] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.799 [INFO][5190] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.800 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:19.806550 containerd[1460]: 2025-09-05 00:07:19.803 [INFO][5181] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.807353 containerd[1460]: time="2025-09-05T00:07:19.807321873Z" level=info msg="TearDown network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" successfully" Sep 5 00:07:19.807510 containerd[1460]: time="2025-09-05T00:07:19.807444168Z" level=info msg="StopPodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" returns successfully" Sep 5 00:07:19.808226 containerd[1460]: time="2025-09-05T00:07:19.808205503Z" level=info msg="RemovePodSandbox for \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" Sep 5 00:07:19.808268 containerd[1460]: time="2025-09-05T00:07:19.808233968Z" level=info msg="Forcibly stopping sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\"" Sep 5 00:07:19.819738 kubelet[2496]: E0905 00:07:19.819608 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:19.840195 kubelet[2496]: I0905 00:07:19.839936 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-6qqh7" podStartSLOduration=59.839912563 podStartE2EDuration="59.839912563s" podCreationTimestamp="2025-09-05 00:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:07:19.838867613 +0000 UTC m=+65.362365187" watchObservedRunningTime="2025-09-05 00:07:19.839912563 +0000 UTC m=+65.363410168" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.849 [WARNING][5207] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" WorkloadEndpoint="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.849 [INFO][5207] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.849 [INFO][5207] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" iface="eth0" netns="" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.849 [INFO][5207] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.849 [INFO][5207] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.881 [INFO][5218] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.881 [INFO][5218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.881 [INFO][5218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.887 [WARNING][5218] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.887 [INFO][5218] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" HandleID="k8s-pod-network.b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Workload="localhost-k8s-whisker--55d99f4944--556sk-eth0" Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.889 [INFO][5218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:19.895294 containerd[1460]: 2025-09-05 00:07:19.892 [INFO][5207] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472" Sep 5 00:07:19.895662 containerd[1460]: time="2025-09-05T00:07:19.895333217Z" level=info msg="TearDown network for sandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" successfully" Sep 5 00:07:19.900482 containerd[1460]: time="2025-09-05T00:07:19.900443770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:07:19.900566 containerd[1460]: time="2025-09-05T00:07:19.900512301Z" level=info msg="RemovePodSandbox \"b24b25a0d5f636d740d20e53f890fa1071dbb8cc8ba1850f2ac1fe496414d472\" returns successfully" Sep 5 00:07:19.901218 containerd[1460]: time="2025-09-05T00:07:19.900974321Z" level=info msg="StopPodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\"" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.945 [WARNING][5238] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--66gx6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"75204577-0370-4fa9-a0b9-0d372c897a36", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a", Pod:"goldmane-54d579b49d-66gx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib253a761550", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.946 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.946 [INFO][5238] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" iface="eth0" netns="" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.946 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.946 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.974 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.974 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.974 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.981 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.981 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.982 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:19.990323 containerd[1460]: 2025-09-05 00:07:19.987 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:19.990962 containerd[1460]: time="2025-09-05T00:07:19.990378164Z" level=info msg="TearDown network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" successfully" Sep 5 00:07:19.990962 containerd[1460]: time="2025-09-05T00:07:19.990409194Z" level=info msg="StopPodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" returns successfully" Sep 5 00:07:19.991125 containerd[1460]: time="2025-09-05T00:07:19.991102758Z" level=info msg="RemovePodSandbox for \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\"" Sep 5 00:07:19.991156 containerd[1460]: time="2025-09-05T00:07:19.991134819Z" level=info msg="Forcibly stopping sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\"" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.026 [WARNING][5267] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--66gx6-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"75204577-0370-4fa9-a0b9-0d372c897a36", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a", Pod:"goldmane-54d579b49d-66gx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib253a761550", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.027 [INFO][5267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.027 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" iface="eth0" netns="" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.027 [INFO][5267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.027 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.055 [INFO][5276] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.055 [INFO][5276] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.055 [INFO][5276] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.061 [WARNING][5276] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.061 [INFO][5276] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" HandleID="k8s-pod-network.c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Workload="localhost-k8s-goldmane--54d579b49d--66gx6-eth0" Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.065 [INFO][5276] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:20.071963 containerd[1460]: 2025-09-05 00:07:20.069 [INFO][5267] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48" Sep 5 00:07:20.071963 containerd[1460]: time="2025-09-05T00:07:20.071925819Z" level=info msg="TearDown network for sandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" successfully" Sep 5 00:07:20.075948 containerd[1460]: time="2025-09-05T00:07:20.075908866Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:07:20.076079 containerd[1460]: time="2025-09-05T00:07:20.076057922Z" level=info msg="RemovePodSandbox \"c19b9bfa0e61e7bdbf3f58815e8eae2e8ec0882b6e057474d2694e57bfd5fd48\" returns successfully" Sep 5 00:07:20.092963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580048676.mount: Deactivated successfully. Sep 5 00:07:20.828017 kubelet[2496]: E0905 00:07:20.827959 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:21.578326 containerd[1460]: time="2025-09-05T00:07:21.578219230Z" level=info msg="StopPodSandbox for \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\"" Sep 5 00:07:21.830269 kubelet[2496]: E0905 00:07:21.830147 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.964 [INFO][5300] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.965 [INFO][5300] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" iface="eth0" netns="/var/run/netns/cni-f8f275d1-0790-f7d7-f2d5-29a4a046ec2c" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.965 [INFO][5300] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" iface="eth0" netns="/var/run/netns/cni-f8f275d1-0790-f7d7-f2d5-29a4a046ec2c" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.965 [INFO][5300] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" iface="eth0" netns="/var/run/netns/cni-f8f275d1-0790-f7d7-f2d5-29a4a046ec2c" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.965 [INFO][5300] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.965 [INFO][5300] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.989 [INFO][5313] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" HandleID="k8s-pod-network.dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Workload="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.990 [INFO][5313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:21.990 [INFO][5313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:22.031 [WARNING][5313] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" HandleID="k8s-pod-network.dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Workload="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:22.031 [INFO][5313] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" HandleID="k8s-pod-network.dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Workload="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:22.033 [INFO][5313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:22.038531 containerd[1460]: 2025-09-05 00:07:22.035 [INFO][5300] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c" Sep 5 00:07:22.039121 containerd[1460]: time="2025-09-05T00:07:22.038796301Z" level=info msg="TearDown network for sandbox \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\" successfully" Sep 5 00:07:22.039121 containerd[1460]: time="2025-09-05T00:07:22.038823553Z" level=info msg="StopPodSandbox for \"dd480169ec381819d36fae7bb1eaa2f8daf8e1ed20b7c95cc8d934b7c665f38c\" returns successfully" Sep 5 00:07:22.039614 containerd[1460]: time="2025-09-05T00:07:22.039568654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-pn4h6,Uid:5849096c-759f-41e6-9d07-d3b4efa485b6,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:07:22.042495 systemd[1]: run-netns-cni\x2df8f275d1\x2d0790\x2df7d7\x2df2d5\x2d29a4a046ec2c.mount: Deactivated successfully. 
Sep 5 00:07:22.119512 containerd[1460]: time="2025-09-05T00:07:22.119353323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:22.283139 containerd[1460]: time="2025-09-05T00:07:22.283045618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 5 00:07:22.318335 containerd[1460]: time="2025-09-05T00:07:22.318261564Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:22.331807 containerd[1460]: time="2025-09-05T00:07:22.331719313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:22.333103 containerd[1460]: time="2025-09-05T00:07:22.333053966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 10.006260953s" Sep 5 00:07:22.333103 containerd[1460]: time="2025-09-05T00:07:22.333101107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 5 00:07:22.334518 containerd[1460]: time="2025-09-05T00:07:22.334467951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 5 00:07:22.335935 containerd[1460]: time="2025-09-05T00:07:22.335899360Z" level=info msg="CreateContainer within sandbox \"72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 5 00:07:22.367903 containerd[1460]: time="2025-09-05T00:07:22.367813295Z" level=info msg="CreateContainer within sandbox \"72cc2cd99de61e69f402d84a43fe8149ad8182bf08bcaad928833928a364810a\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b2fd385428d1976dd666e12e514aeca5e80c60bba21f65128bc0acdab3b8b403\"" Sep 5 00:07:22.368781 containerd[1460]: time="2025-09-05T00:07:22.368753931Z" level=info msg="StartContainer for \"b2fd385428d1976dd666e12e514aeca5e80c60bba21f65128bc0acdab3b8b403\"" Sep 5 00:07:22.407072 systemd[1]: Started cri-containerd-b2fd385428d1976dd666e12e514aeca5e80c60bba21f65128bc0acdab3b8b403.scope - libcontainer container b2fd385428d1976dd666e12e514aeca5e80c60bba21f65128bc0acdab3b8b403. Sep 5 00:07:22.463314 systemd[1]: Started sshd@14-10.0.0.21:22-10.0.0.1:42092.service - OpenSSH per-connection server daemon (10.0.0.1:42092). Sep 5 00:07:22.595338 containerd[1460]: time="2025-09-05T00:07:22.595277915Z" level=info msg="StartContainer for \"b2fd385428d1976dd666e12e514aeca5e80c60bba21f65128bc0acdab3b8b403\" returns successfully" Sep 5 00:07:22.600248 sshd[5369]: Accepted publickey for core from 10.0.0.1 port 42092 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:22.602403 sshd[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:22.607060 systemd-logind[1438]: New session 15 of user core. 
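containerd times the goldmane pull at 10.006260953s, and the kubelet's startup tracker (further below) carries the same window as firstStartedPulling/lastFinishedPulling timestamps. A sketch of recovering such a duration from the logged values (timestamps copied from the surrounding entries; the small difference from containerd's figure comes from the two components sampling their clocks at slightly different instants):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The kubelet prints times in Go's default "2006-01-02 15:04:05 ... MST" form.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        started, _ := time.Parse(layout, "2025-09-05 00:07:12.32612581 +0000 UTC")
        finished, _ := time.Parse(layout, "2025-09-05 00:07:22.334287304 +0000 UTC")

        // ~10.008s of wall-clock pulling, alongside containerd's own
        // "in 10.006260953s" measured inside the pull operation.
        fmt.Println(finished.Sub(started))
    }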
Sep 5 00:07:22.614087 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:07:22.810063 systemd-networkd[1373]: calif68e1337556: Link UP Sep 5 00:07:22.810499 systemd-networkd[1373]: calif68e1337556: Gained carrier Sep 5 00:07:22.811471 sshd[5369]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:22.818456 systemd[1]: sshd@14-10.0.0.21:22-10.0.0.1:42092.service: Deactivated successfully. Sep 5 00:07:22.820975 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:07:22.821637 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:07:22.824177 systemd-logind[1438]: Removed session 15. Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.382 [INFO][5321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0 calico-apiserver-6f6b95c467- calico-apiserver 5849096c-759f-41e6-9d07-d3b4efa485b6 1178 0 2025-09-05 00:06:34 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f6b95c467 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f6b95c467-pn4h6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif68e1337556 [] [] <nil>}} ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.382 [INFO][5321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.426 [INFO][5353] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" HandleID="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Workload="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.427 [INFO][5353] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" HandleID="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Workload="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd430), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6f6b95c467-pn4h6", "timestamp":"2025-09-05 00:07:22.426854356 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.427 [INFO][5353] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.427 [INFO][5353] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.427 [INFO][5353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.439 [INFO][5353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.447 [INFO][5353] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.457 [INFO][5353] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.465 [INFO][5353] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.600 [INFO][5353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.600 [INFO][5353] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.654 [INFO][5353] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.694 [INFO][5353] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.801 [INFO][5353] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.801 [INFO][5353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" host="localhost" Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.801 [INFO][5353] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
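The walk above is Calico's block-affinity IPAM: the node holds an affine /26 block (192.168.88.128/26, 64 addresses) and the new endpoint receives a free ordinal inside it, here .135. A sketch of the ordinal arithmetic under that reading (the real ipam.go additionally negotiates affinity claims, handles, and retries on conflict):

    package main

    import (
        "fmt"
        "net/netip"
    )

    // ordinal returns an IPv4 address's index within its block, the unit
    // that block-based IPAM allocates by.
    func ordinal(block netip.Prefix, a netip.Addr) int {
        base := block.Addr().As4()
        ip := a.As4()
        return int(ip[3]) - int(base[3]) + (int(ip[2])-int(base[2]))<<8
    }

    func main() {
        block := netip.MustParsePrefix("192.168.88.128/26") // the host's affine block, per the log
        addr := netip.MustParseAddr("192.168.88.135")       // the address claimed above
        size := 1 << (32 - block.Bits())                    // a /26 holds 64 ordinals
        fmt.Printf("%s is ordinal %d of %d in %s\n", addr, ordinal(block, addr), size, block)
        // Output: 192.168.88.135 is ordinal 7 of 64 in 192.168.88.128/26
    }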
Sep 5 00:07:22.828161 containerd[1460]: 2025-09-05 00:07:22.801 [INFO][5353] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" HandleID="k8s-pod-network.955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Workload="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.828736 containerd[1460]: 2025-09-05 00:07:22.806 [INFO][5321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0", GenerateName:"calico-apiserver-6f6b95c467-", Namespace:"calico-apiserver", SelfLink:"", UID:"5849096c-759f-41e6-9d07-d3b4efa485b6", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 34, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6b95c467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f6b95c467-pn4h6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif68e1337556", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:22.828736 containerd[1460]: 2025-09-05 00:07:22.806 [INFO][5321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.828736 containerd[1460]: 2025-09-05 00:07:22.806 [INFO][5321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif68e1337556 ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.828736 containerd[1460]: 2025-09-05 00:07:22.810 [INFO][5321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.828736 containerd[1460]: 2025-09-05 00:07:22.811 [INFO][5321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0", GenerateName:"calico-apiserver-6f6b95c467-", Namespace:"calico-apiserver", SelfLink:"", UID:"5849096c-759f-41e6-9d07-d3b4efa485b6", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f6b95c467", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b", Pod:"calico-apiserver-6f6b95c467-pn4h6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif68e1337556", MAC:"ee:e6:55:eb:c4:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:07:22.828736 containerd[1460]: 2025-09-05 00:07:22.822 [INFO][5321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b" Namespace="calico-apiserver" Pod="calico-apiserver-6f6b95c467-pn4h6" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f6b95c467--pn4h6-eth0" Sep 5 00:07:22.858667 kubelet[2496]: I0905 00:07:22.858562 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-66gx6" podStartSLOduration=35.850377913 podStartE2EDuration="45.858539427s" podCreationTimestamp="2025-09-05 00:06:37 +0000 UTC" firstStartedPulling="2025-09-05 00:07:12.32612581 +0000 UTC m=+57.849623384" lastFinishedPulling="2025-09-05 00:07:22.334287304 +0000 UTC m=+67.857784898" observedRunningTime="2025-09-05 00:07:22.858230985 +0000 UTC m=+68.381728559" watchObservedRunningTime="2025-09-05 00:07:22.858539427 +0000 UTC m=+68.382037001" Sep 5 00:07:22.890224 containerd[1460]: time="2025-09-05T00:07:22.890094865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:07:22.890224 containerd[1460]: time="2025-09-05T00:07:22.890177704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:07:22.890224 containerd[1460]: time="2025-09-05T00:07:22.890192162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:22.890465 containerd[1460]: time="2025-09-05T00:07:22.890284238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:07:22.911986 systemd[1]: Started cri-containerd-955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b.scope - libcontainer container 955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b. Sep 5 00:07:22.927511 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:07:22.952070 containerd[1460]: time="2025-09-05T00:07:22.952013581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f6b95c467-pn4h6,Uid:5849096c-759f-41e6-9d07-d3b4efa485b6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b\"" Sep 5 00:07:24.466083 systemd-networkd[1373]: calif68e1337556: Gained IPv6LL Sep 5 00:07:25.579941 kubelet[2496]: E0905 00:07:25.579780 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:27.828419 systemd[1]: Started sshd@15-10.0.0.21:22-10.0.0.1:42102.service - OpenSSH per-connection server daemon (10.0.0.1:42102). Sep 5 00:07:28.412712 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 42102 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:28.414604 sshd[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:28.419886 systemd-logind[1438]: New session 16 of user core. Sep 5 00:07:28.428030 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:07:28.589082 sshd[5503]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:28.594081 systemd[1]: sshd@15-10.0.0.21:22-10.0.0.1:42102.service: Deactivated successfully. Sep 5 00:07:28.596385 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:07:28.597139 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:07:28.598054 systemd-logind[1438]: Removed session 16. 
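systemd-networkd's "Link UP", "Gained carrier", and "Gained IPv6LL" lines for calif68e1337556 are its reactions to rtnetlink events for the new host-side veth. A sketch of watching the same event stream from Go, assuming the vishvananda/netlink package (networkd itself uses its own netlink code):

    package main

    import (
        "fmt"

        "github.com/vishvananda/netlink"
    )

    func main() {
        updates := make(chan netlink.LinkUpdate)
        done := make(chan struct{})
        defer close(done)
        // Subscribe to rtnetlink link messages, the stream behind entries
        // such as "calif68e1337556: Gained carrier".
        if err := netlink.LinkSubscribe(updates, done); err != nil {
            panic(err)
        }
        for u := range updates {
            attrs := u.Link.Attrs()
            if attrs.Name == "calif68e1337556" { // the veth from the log above
                fmt.Printf("%s: flags=%v operstate=%v\n", attrs.Name, attrs.Flags, attrs.OperState)
            }
        }
    }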
Sep 5 00:07:29.290276 containerd[1460]: time="2025-09-05T00:07:29.290167862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:29.327311 containerd[1460]: time="2025-09-05T00:07:29.327200141Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 5 00:07:29.361018 containerd[1460]: time="2025-09-05T00:07:29.360932469Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:29.397342 containerd[1460]: time="2025-09-05T00:07:29.397278073Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:07:29.398083 containerd[1460]: time="2025-09-05T00:07:29.398047264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 7.063533506s" Sep 5 00:07:29.398173 containerd[1460]: time="2025-09-05T00:07:29.398087302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 5 00:07:29.399225 containerd[1460]: time="2025-09-05T00:07:29.399189108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:07:29.407416 containerd[1460]: time="2025-09-05T00:07:29.407362188Z" level=info msg="CreateContainer within sandbox \"86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 00:07:29.915727 containerd[1460]: time="2025-09-05T00:07:29.915643114Z" level=info msg="CreateContainer within sandbox \"86b2c511d943095db6b3e2dcbd94186b4654be3d38dc363deb37976fc1ab6e82\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"13ce80540dfc8f524a1f631204ad047ec154606a99db2f4aebed7ff8c0f00aec\"" Sep 5 00:07:29.916382 containerd[1460]: time="2025-09-05T00:07:29.916333735Z" level=info msg="StartContainer for \"13ce80540dfc8f524a1f631204ad047ec154606a99db2f4aebed7ff8c0f00aec\"" Sep 5 00:07:29.970103 systemd[1]: Started cri-containerd-13ce80540dfc8f524a1f631204ad047ec154606a99db2f4aebed7ff8c0f00aec.scope - libcontainer container 13ce80540dfc8f524a1f631204ad047ec154606a99db2f4aebed7ff8c0f00aec. 
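The kube-controllers pull above runs inside containerd's CRI plugin; the same fetch-and-unpack can be reproduced with containerd's Go client. A minimal sketch, assuming the default socket path and the k8s.io namespace that the CRI plugin uses for kubelet-managed images:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Fetch and unpack: the two phases behind the ImageCreate events
        // and the "Pulled image ... in 7.063533506s" entry above.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/kube-controllers:v3.30.3", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        size, _ := img.Size(ctx)
        fmt.Printf("pulled %s (%d bytes of content)\n", img.Name(), size)
    }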
Sep 5 00:07:30.019698 containerd[1460]: time="2025-09-05T00:07:30.019496270Z" level=info msg="StartContainer for \"13ce80540dfc8f524a1f631204ad047ec154606a99db2f4aebed7ff8c0f00aec\" returns successfully" Sep 5 00:07:30.959109 kubelet[2496]: I0905 00:07:30.959035 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dc6b5cb6f-d46dk" podStartSLOduration=34.999074996 podStartE2EDuration="51.959012988s" podCreationTimestamp="2025-09-05 00:06:39 +0000 UTC" firstStartedPulling="2025-09-05 00:07:12.439060913 +0000 UTC m=+57.962558487" lastFinishedPulling="2025-09-05 00:07:29.398998885 +0000 UTC m=+74.922496479" observedRunningTime="2025-09-05 00:07:30.958523051 +0000 UTC m=+76.482020626" watchObservedRunningTime="2025-09-05 00:07:30.959012988 +0000 UTC m=+76.482510562" Sep 5 00:07:33.601132 systemd[1]: Started sshd@16-10.0.0.21:22-10.0.0.1:52072.service - OpenSSH per-connection server daemon (10.0.0.1:52072). Sep 5 00:07:33.759617 sshd[5598]: Accepted publickey for core from 10.0.0.1 port 52072 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg Sep 5 00:07:33.761246 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:33.765345 systemd-logind[1438]: New session 17 of user core. Sep 5 00:07:33.781054 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:07:33.977602 sshd[5598]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:33.981741 systemd[1]: sshd@16-10.0.0.21:22-10.0.0.1:52072.service: Deactivated successfully. Sep 5 00:07:33.983819 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:07:33.984469 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:07:33.985326 systemd-logind[1438]: Removed session 17. 
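In the tracker line above, podStartSLOduration=34.999074996 is podStartE2EDuration="51.959012988s" minus the image-pull window, lastFinishedPulling - firstStartedPulling (about 16.960s): pull time is excluded from the SLO figure. A sketch of that bookkeeping with the logged values (the results differ from the printed ones by well under a millisecond because the kubelet samples its clocks at slightly different instants):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-09-05 00:06:39 +0000 UTC")
        firstPull := mustParse("2025-09-05 00:07:12.439060913 +0000 UTC")
        lastPull := mustParse("2025-09-05 00:07:29.398998885 +0000 UTC")
        running := mustParse("2025-09-05 00:07:30.958523051 +0000 UTC")

        e2e := running.Sub(created)        // ~51.959s end to end
        pulling := lastPull.Sub(firstPull) // ~16.960s spent pulling images
        fmt.Println("e2e:", e2e, "slo:", e2e-pulling) // slo ~34.999s
    }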
Sep 5 00:07:36.045312 containerd[1460]: time="2025-09-05T00:07:36.045248571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:36.051749 containerd[1460]: time="2025-09-05T00:07:36.051688277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 5 00:07:36.059944 containerd[1460]: time="2025-09-05T00:07:36.059909878Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:36.089859 containerd[1460]: time="2025-09-05T00:07:36.089812558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:36.090414 containerd[1460]: time="2025-09-05T00:07:36.090385200Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 6.691156697s"
Sep 5 00:07:36.090496 containerd[1460]: time="2025-09-05T00:07:36.090414075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 5 00:07:36.091353 containerd[1460]: time="2025-09-05T00:07:36.091326725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 00:07:36.092417 containerd[1460]: time="2025-09-05T00:07:36.092388108Z" level=info msg="CreateContainer within sandbox \"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 5 00:07:37.036266 containerd[1460]: time="2025-09-05T00:07:37.036205677Z" level=info msg="CreateContainer within sandbox \"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6af2af25c62d6c86729e1ec835fa8acd3b6a9169a35a52f10007ec58400fb886\""
Sep 5 00:07:37.036732 containerd[1460]: time="2025-09-05T00:07:37.036702133Z" level=info msg="StartContainer for \"6af2af25c62d6c86729e1ec835fa8acd3b6a9169a35a52f10007ec58400fb886\""
Sep 5 00:07:37.073029 systemd[1]: Started cri-containerd-6af2af25c62d6c86729e1ec835fa8acd3b6a9169a35a52f10007ec58400fb886.scope - libcontainer container 6af2af25c62d6c86729e1ec835fa8acd3b6a9169a35a52f10007ec58400fb886.
Sep 5 00:07:37.161790 containerd[1460]: time="2025-09-05T00:07:37.161672152Z" level=info msg="StartContainer for \"6af2af25c62d6c86729e1ec835fa8acd3b6a9169a35a52f10007ec58400fb886\" returns successfully"
Sep 5 00:07:38.962768 systemd[1]: Started sshd@17-10.0.0.21:22-10.0.0.1:52082.service - OpenSSH per-connection server daemon (10.0.0.1:52082).
Sep 5 00:07:39.021438 sshd[5655]: Accepted publickey for core from 10.0.0.1 port 52082 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:39.023292 sshd[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:39.029533 systemd-logind[1438]: New session 18 of user core.
Sep 5 00:07:39.039044 systemd[1]: Started session-18.scope - Session 18 of User core.
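[Editor's note] Each container in this log follows the same CRI lifecycle: PullImage (the ImageCreate events plus the "Pulled image ... in Ns" summary), CreateContainer inside an already-running pod sandbox, then StartContainer, at which point systemd starts a transient cri-containerd-<id>.scope unit for the libcontainer process. The sketch below approximates that sequence with containerd's public Go client rather than the CRI plugin's internal code paths; the container ID and snapshot name are illustrative, and the socket path assumes the default containerd install.

```go
// lifecycle_sketch.go - a rough approximation of the PullImage ->
// CreateContainer -> StartContainer sequence seen above, via the
// public containerd client (not the CRI plugin itself).
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack, emitting ImageCreate events like those above.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/csi:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: register metadata and a writable snapshot for the image.
	container, err := client.NewContainer(ctx, "calico-csi-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-csi-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create and start the task; roughly the point where the
	// transient cri-containerd-<id>.scope unit appears under systemd.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```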
Sep 5 00:07:39.226947 sshd[5655]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:39.231638 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Sep 5 00:07:39.232233 systemd[1]: sshd@17-10.0.0.21:22-10.0.0.1:52082.service: Deactivated successfully.
Sep 5 00:07:39.234461 systemd[1]: session-18.scope: Deactivated successfully.
Sep 5 00:07:39.235443 systemd-logind[1438]: Removed session 18.
Sep 5 00:07:39.567341 containerd[1460]: time="2025-09-05T00:07:39.567218350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:39.568061 containerd[1460]: time="2025-09-05T00:07:39.568000019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 5 00:07:39.569497 containerd[1460]: time="2025-09-05T00:07:39.569463836Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:39.571797 containerd[1460]: time="2025-09-05T00:07:39.571744278Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:39.572793 containerd[1460]: time="2025-09-05T00:07:39.572759601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.481395145s"
Sep 5 00:07:39.572867 containerd[1460]: time="2025-09-05T00:07:39.572850023Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:07:39.574705 containerd[1460]: time="2025-09-05T00:07:39.574685478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 00:07:39.575357 containerd[1460]: time="2025-09-05T00:07:39.575324374Z" level=info msg="CreateContainer within sandbox \"9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:07:39.589909 containerd[1460]: time="2025-09-05T00:07:39.589866722Z" level=info msg="CreateContainer within sandbox \"9202e313d03257ce2acce32c9cc4d66e31a8724c307b6b7cbc170cbab5f1265d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f06ff340e38ca4e79056a5ae73a87a77616f7082bee3a92e7f46d1ddd67d3b9d\""
Sep 5 00:07:39.590690 containerd[1460]: time="2025-09-05T00:07:39.590648660Z" level=info msg="StartContainer for \"f06ff340e38ca4e79056a5ae73a87a77616f7082bee3a92e7f46d1ddd67d3b9d\""
Sep 5 00:07:39.679129 systemd[1]: Started cri-containerd-f06ff340e38ca4e79056a5ae73a87a77616f7082bee3a92e7f46d1ddd67d3b9d.scope - libcontainer container f06ff340e38ca4e79056a5ae73a87a77616f7082bee3a92e7f46d1ddd67d3b9d.
Sep 5 00:07:39.864938 containerd[1460]: time="2025-09-05T00:07:39.864649349Z" level=info msg="StartContainer for \"f06ff340e38ca4e79056a5ae73a87a77616f7082bee3a92e7f46d1ddd67d3b9d\" returns successfully"
Sep 5 00:07:40.299628 containerd[1460]: time="2025-09-05T00:07:40.298868393Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:40.318156 containerd[1460]: time="2025-09-05T00:07:40.318090616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 00:07:40.323313 containerd[1460]: time="2025-09-05T00:07:40.323275595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 748.564459ms"
Sep 5 00:07:40.323313 containerd[1460]: time="2025-09-05T00:07:40.323310551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:07:40.324771 containerd[1460]: time="2025-09-05T00:07:40.324610366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 5 00:07:40.326276 containerd[1460]: time="2025-09-05T00:07:40.326224807Z" level=info msg="CreateContainer within sandbox \"955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:07:40.424946 containerd[1460]: time="2025-09-05T00:07:40.424357477Z" level=info msg="CreateContainer within sandbox \"955c3a917ef20fc7a1db7abc9f7ef6ed7fd574090e0ac68afd065537ffe2fe3b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"448eb8de3265633939bfe65ae8da48b6c518fe2b02dbd32034beb7f285ef87ef\""
Sep 5 00:07:40.428398 containerd[1460]: time="2025-09-05T00:07:40.428355055Z" level=info msg="StartContainer for \"448eb8de3265633939bfe65ae8da48b6c518fe2b02dbd32034beb7f285ef87ef\""
Sep 5 00:07:40.461273 systemd[1]: Started cri-containerd-448eb8de3265633939bfe65ae8da48b6c518fe2b02dbd32034beb7f285ef87ef.scope - libcontainer container 448eb8de3265633939bfe65ae8da48b6c518fe2b02dbd32034beb7f285ef87ef.
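[Editor's note] The second apiserver pull above completes in ~749ms with only 77 bytes read, versus ~47 MB in 3.48s for the first: every blob is already in containerd's content store, so the re-pull presumably only re-resolves the tag (hence the ImageUpdate rather than ImageCreate event) instead of downloading layers. A hedged sketch of checking the store before pulling, again via containerd's Go client in the k8s.io namespace:

```go
// cached_pull_check.go - sketch: prefer a cached image, fall back to a pull.
// Assumes the same containerd socket and "k8s.io" namespace as the logs above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.3"
	if img, err := client.GetImage(ctx, ref); err == nil {
		// All blobs already present: a re-pull would move almost no bytes,
		// which is consistent with the "bytes read=77" entry above.
		fmt.Println("cached:", img.Name())
		return
	}
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", ref)
}
```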
Sep 5 00:07:40.523394 containerd[1460]: time="2025-09-05T00:07:40.523247077Z" level=info msg="StartContainer for \"448eb8de3265633939bfe65ae8da48b6c518fe2b02dbd32034beb7f285ef87ef\" returns successfully"
Sep 5 00:07:41.065237 kubelet[2496]: I0905 00:07:41.065129 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f6b95c467-pn4h6" podStartSLOduration=49.693348528 podStartE2EDuration="1m7.064166306s" podCreationTimestamp="2025-09-05 00:06:34 +0000 UTC" firstStartedPulling="2025-09-05 00:07:22.953202604 +0000 UTC m=+68.476700178" lastFinishedPulling="2025-09-05 00:07:40.324020382 +0000 UTC m=+85.847517956" observedRunningTime="2025-09-05 00:07:41.063969441 +0000 UTC m=+86.587467016" watchObservedRunningTime="2025-09-05 00:07:41.064166306 +0000 UTC m=+86.587663880"
Sep 5 00:07:41.066087 kubelet[2496]: I0905 00:07:41.065723 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6f6b95c467-9vf8n" podStartSLOduration=40.854726839 podStartE2EDuration="1m7.06570823s" podCreationTimestamp="2025-09-05 00:06:34 +0000 UTC" firstStartedPulling="2025-09-05 00:07:13.362935022 +0000 UTC m=+58.886432596" lastFinishedPulling="2025-09-05 00:07:39.573916413 +0000 UTC m=+85.097413987" observedRunningTime="2025-09-05 00:07:40.171314758 +0000 UTC m=+85.694812342" watchObservedRunningTime="2025-09-05 00:07:41.06570823 +0000 UTC m=+86.589205814"
Sep 5 00:07:43.445205 containerd[1460]: time="2025-09-05T00:07:43.445146186Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:43.446657 containerd[1460]: time="2025-09-05T00:07:43.446616140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 5 00:07:43.448229 containerd[1460]: time="2025-09-05T00:07:43.448167891Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:43.451075 containerd[1460]: time="2025-09-05T00:07:43.451038057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:07:43.452082 containerd[1460]: time="2025-09-05T00:07:43.452042948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.127403067s"
Sep 5 00:07:43.452133 containerd[1460]: time="2025-09-05T00:07:43.452079668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 5 00:07:43.454066 containerd[1460]: time="2025-09-05T00:07:43.454037700Z" level=info msg="CreateContainer within sandbox \"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 5 00:07:43.481334 containerd[1460]: time="2025-09-05T00:07:43.481283762Z" level=info msg="CreateContainer within sandbox \"b1fdc1337323fce5fb52b36be8c86f5be31be02d4d3366e5c8e63a30cfb55144\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"46c27c0d01a092f8a267658784f72930df3f1b8b18eebf1e2f881cee8cace8d3\""
Sep 5 00:07:43.482005 containerd[1460]: time="2025-09-05T00:07:43.481967052Z" level=info msg="StartContainer for \"46c27c0d01a092f8a267658784f72930df3f1b8b18eebf1e2f881cee8cace8d3\""
Sep 5 00:07:43.520059 systemd[1]: Started cri-containerd-46c27c0d01a092f8a267658784f72930df3f1b8b18eebf1e2f881cee8cace8d3.scope - libcontainer container 46c27c0d01a092f8a267658784f72930df3f1b8b18eebf1e2f881cee8cace8d3.
Sep 5 00:07:43.556128 containerd[1460]: time="2025-09-05T00:07:43.556069107Z" level=info msg="StartContainer for \"46c27c0d01a092f8a267658784f72930df3f1b8b18eebf1e2f881cee8cace8d3\" returns successfully"
Sep 5 00:07:43.578011 kubelet[2496]: E0905 00:07:43.577975 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:43.732085 kubelet[2496]: I0905 00:07:43.731951 2496 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 5 00:07:43.732085 kubelet[2496]: I0905 00:07:43.732011 2496 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 5 00:07:43.935392 kubelet[2496]: I0905 00:07:43.935311 2496 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-9mdwq" podStartSLOduration=35.247665682 podStartE2EDuration="1m5.93529025s" podCreationTimestamp="2025-09-05 00:06:38 +0000 UTC" firstStartedPulling="2025-09-05 00:07:12.765236045 +0000 UTC m=+58.288733619" lastFinishedPulling="2025-09-05 00:07:43.452860612 +0000 UTC m=+88.976358187" observedRunningTime="2025-09-05 00:07:43.934158479 +0000 UTC m=+89.457656073" watchObservedRunningTime="2025-09-05 00:07:43.93529025 +0000 UTC m=+89.458787824"
Sep 5 00:07:44.245674 systemd[1]: Started sshd@18-10.0.0.21:22-10.0.0.1:53164.service - OpenSSH per-connection server daemon (10.0.0.1:53164).
Sep 5 00:07:44.296275 sshd[5837]: Accepted publickey for core from 10.0.0.1 port 53164 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:44.298117 sshd[5837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:44.302282 systemd-logind[1438]: New session 19 of user core.
Sep 5 00:07:44.308956 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 5 00:07:44.574619 sshd[5837]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:44.584422 systemd[1]: sshd@18-10.0.0.21:22-10.0.0.1:53164.service: Deactivated successfully.
Sep 5 00:07:44.587045 systemd[1]: session-19.scope: Deactivated successfully.
Sep 5 00:07:44.588692 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Sep 5 00:07:44.596086 systemd[1]: Started sshd@19-10.0.0.21:22-10.0.0.1:53176.service - OpenSSH per-connection server daemon (10.0.0.1:53176).
Sep 5 00:07:44.597120 systemd-logind[1438]: Removed session 19.
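[Editor's note] The csi_plugin.go entries above show the kubelet side of CSI registration: the node-driver-registrar advertises csi.tigera.io on a socket under /var/lib/kubelet/plugins/, and kubelet validates the driver before registering it. The probe below sketches that validation step under the assumption that the driver's CSI Identity gRPC service is reachable on the logged socket path; it is not kubelet's actual code, just the standard GetPluginInfo call from the CSI spec package.

```go
// csi_probe.go - sketch: ask a CSI driver for its name and version over the
// socket that appears in the kubelet csi_plugin.go log lines above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// Socket path taken from the kubelet log line; CSI sockets carry no TLS.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	info, err := csi.NewIdentityClient(conn).GetPluginInfo(ctx, &csi.GetPluginInfoRequest{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("driver %s, version %s\n", info.GetName(), info.GetVendorVersion())
}
```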
Sep 5 00:07:44.625078 sshd[5851]: Accepted publickey for core from 10.0.0.1 port 53176 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:44.627779 sshd[5851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:44.633256 systemd-logind[1438]: New session 20 of user core.
Sep 5 00:07:44.648966 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 5 00:07:44.998266 sshd[5851]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:45.006855 systemd[1]: sshd@19-10.0.0.21:22-10.0.0.1:53176.service: Deactivated successfully.
Sep 5 00:07:45.008681 systemd[1]: session-20.scope: Deactivated successfully.
Sep 5 00:07:45.011951 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Sep 5 00:07:45.012362 systemd[1]: Started sshd@20-10.0.0.21:22-10.0.0.1:53192.service - OpenSSH per-connection server daemon (10.0.0.1:53192).
Sep 5 00:07:45.013711 systemd-logind[1438]: Removed session 20.
Sep 5 00:07:45.062908 sshd[5868]: Accepted publickey for core from 10.0.0.1 port 53192 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:45.064561 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:45.068792 systemd-logind[1438]: New session 21 of user core.
Sep 5 00:07:45.076008 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 5 00:07:46.094293 sshd[5868]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:46.104386 systemd[1]: sshd@20-10.0.0.21:22-10.0.0.1:53192.service: Deactivated successfully.
Sep 5 00:07:46.106445 systemd[1]: session-21.scope: Deactivated successfully.
Sep 5 00:07:46.107320 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
Sep 5 00:07:46.115167 systemd[1]: Started sshd@21-10.0.0.21:22-10.0.0.1:53206.service - OpenSSH per-connection server daemon (10.0.0.1:53206).
Sep 5 00:07:46.116473 systemd-logind[1438]: Removed session 21.
Sep 5 00:07:46.148771 sshd[5890]: Accepted publickey for core from 10.0.0.1 port 53206 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:46.151989 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:46.159446 systemd-logind[1438]: New session 22 of user core.
Sep 5 00:07:46.167971 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 5 00:07:46.474890 sshd[5890]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:46.483370 systemd[1]: sshd@21-10.0.0.21:22-10.0.0.1:53206.service: Deactivated successfully.
Sep 5 00:07:46.485900 systemd[1]: session-22.scope: Deactivated successfully.
Sep 5 00:07:46.487955 systemd-logind[1438]: Session 22 logged out. Waiting for processes to exit.
Sep 5 00:07:46.494376 systemd[1]: Started sshd@22-10.0.0.21:22-10.0.0.1:53216.service - OpenSSH per-connection server daemon (10.0.0.1:53216).
Sep 5 00:07:46.496388 systemd-logind[1438]: Removed session 22.
Sep 5 00:07:46.538325 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 53216 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:46.539894 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:46.544206 systemd-logind[1438]: New session 23 of user core.
Sep 5 00:07:46.550964 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 5 00:07:46.659446 sshd[5902]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:46.663257 systemd[1]: sshd@22-10.0.0.21:22-10.0.0.1:53216.service: Deactivated successfully.
Sep 5 00:07:46.665232 systemd[1]: session-23.scope: Deactivated successfully.
Sep 5 00:07:46.665886 systemd-logind[1438]: Session 23 logged out. Waiting for processes to exit.
Sep 5 00:07:46.666799 systemd-logind[1438]: Removed session 23.
Sep 5 00:07:51.688471 systemd[1]: Started sshd@23-10.0.0.21:22-10.0.0.1:47954.service - OpenSSH per-connection server daemon (10.0.0.1:47954).
Sep 5 00:07:51.735938 sshd[5922]: Accepted publickey for core from 10.0.0.1 port 47954 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:51.738319 sshd[5922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:51.753636 systemd-logind[1438]: New session 24 of user core.
Sep 5 00:07:51.763054 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 5 00:07:52.005485 sshd[5922]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:52.009045 systemd[1]: sshd@23-10.0.0.21:22-10.0.0.1:47954.service: Deactivated successfully.
Sep 5 00:07:52.014905 systemd[1]: session-24.scope: Deactivated successfully.
Sep 5 00:07:52.025021 systemd-logind[1438]: Session 24 logged out. Waiting for processes to exit.
Sep 5 00:07:52.026390 systemd-logind[1438]: Removed session 24.
Sep 5 00:07:52.583501 kubelet[2496]: E0905 00:07:52.583436 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:54.578121 kubelet[2496]: E0905 00:07:54.578085 2496 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:07:57.027320 systemd[1]: Started sshd@24-10.0.0.21:22-10.0.0.1:47962.service - OpenSSH per-connection server daemon (10.0.0.1:47962).
Sep 5 00:07:57.092438 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 47962 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:07:57.094281 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:07:57.098409 systemd-logind[1438]: New session 25 of user core.
Sep 5 00:07:57.105982 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 5 00:07:57.263600 sshd[5965]: pam_unix(sshd:session): session closed for user core
Sep 5 00:07:57.267656 systemd[1]: sshd@24-10.0.0.21:22-10.0.0.1:47962.service: Deactivated successfully.
Sep 5 00:07:57.269901 systemd[1]: session-25.scope: Deactivated successfully.
Sep 5 00:07:57.270592 systemd-logind[1438]: Session 25 logged out. Waiting for processes to exit.
Sep 5 00:07:57.271589 systemd-logind[1438]: Removed session 25.
Sep 5 00:08:02.281730 systemd[1]: Started sshd@25-10.0.0.21:22-10.0.0.1:54514.service - OpenSSH per-connection server daemon (10.0.0.1:54514).
Sep 5 00:08:02.344460 sshd[6000]: Accepted publickey for core from 10.0.0.1 port 54514 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:08:02.346702 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:08:02.351673 systemd-logind[1438]: New session 26 of user core.
Sep 5 00:08:02.360985 systemd[1]: Started session-26.scope - Session 26 of User core.
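[Editor's note] The recurring dns.go:153 errors above are kubelet warning that the node's resolv.conf lists more nameservers than the classic resolver limit of three (glibc's MAXNS), so it truncates the list to the three servers shown. A minimal sketch of the same check, with the path and limit hardcoded for illustration:

```go
// resolv_check.go - sketch: flag resolv.conf files that exceed the
// three-nameserver limit behind the kubelet dns.go:153 errors above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; extra entries are ignored by the resolver

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxNameservers {
		// Mirrors the shape of the kubelet warning: report the truncated line.
		fmt.Printf("nameserver limit exceeded, applied nameserver line would be: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	} else {
		fmt.Printf("%d nameservers, within limit\n", len(servers))
	}
}
```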
Sep 5 00:08:02.555133 sshd[6000]: pam_unix(sshd:session): session closed for user core
Sep 5 00:08:02.559590 systemd[1]: sshd@25-10.0.0.21:22-10.0.0.1:54514.service: Deactivated successfully.
Sep 5 00:08:02.561547 systemd[1]: session-26.scope: Deactivated successfully.
Sep 5 00:08:02.562348 systemd-logind[1438]: Session 26 logged out. Waiting for processes to exit.
Sep 5 00:08:02.563358 systemd-logind[1438]: Removed session 26.
Sep 5 00:08:07.568612 systemd[1]: Started sshd@26-10.0.0.21:22-10.0.0.1:54520.service - OpenSSH per-connection server daemon (10.0.0.1:54520).
Sep 5 00:08:07.633016 sshd[6037]: Accepted publickey for core from 10.0.0.1 port 54520 ssh2: RSA SHA256:BZINmxpJK+dBFsCIl36ecPsD/s2RBe3WWZDu7gdExMg
Sep 5 00:08:07.635165 sshd[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:08:07.641177 systemd-logind[1438]: New session 27 of user core.
Sep 5 00:08:07.649688 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 5 00:08:07.848719 sshd[6037]: pam_unix(sshd:session): session closed for user core
Sep 5 00:08:07.852702 systemd[1]: sshd@26-10.0.0.21:22-10.0.0.1:54520.service: Deactivated successfully.
Sep 5 00:08:07.854687 systemd[1]: session-27.scope: Deactivated successfully.
Sep 5 00:08:07.855331 systemd-logind[1438]: Session 27 logged out. Waiting for processes to exit.
Sep 5 00:08:07.856284 systemd-logind[1438]: Removed session 27.