Sep 12 17:30:36.016711 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:30:36.016744 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:30:36.016755 kernel: BIOS-provided physical RAM map:
Sep 12 17:30:36.016761 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 12 17:30:36.016768 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 12 17:30:36.016774 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 12 17:30:36.016781 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 12 17:30:36.016788 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 12 17:30:36.016794 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 12 17:30:36.016800 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 12 17:30:36.016809 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 12 17:30:36.016816 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 12 17:30:36.016825 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 12 17:30:36.016832 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 12 17:30:36.016842 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 12 17:30:36.016849 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 12 17:30:36.016859 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 12 17:30:36.016866 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 12 17:30:36.016885 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 12 17:30:36.016893 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 17:30:36.016900 kernel: NX (Execute Disable) protection: active
Sep 12 17:30:36.016906 kernel: APIC: Static calls initialized
Sep 12 17:30:36.016913 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:30:36.016920 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Sep 12 17:30:36.016927 kernel: SMBIOS 2.8 present.
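The e820 entries above are inclusive physical-address ranges tagged usable, reserved, ACPI data/NVS, or type 20 (EFI runtime). A minimal sketch of tallying the usable ranges, assuming Python and that the log has been saved to a hypothetical file dmesg.txt:

```python
import re

# Matches lines like:
# "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable"
E820 = re.compile(r"BIOS-e820: \[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (.+?)\s*$")

usable = 0
with open("dmesg.txt") as log:              # hypothetical capture of this log
    for line in log:
        m = E820.search(line)
        if m and m.group(3) == "usable":
            start, end = (int(x, 16) for x in m.group(1, 2))
            usable += end - start + 1       # ranges are inclusive of both ends

print(f"firmware-reported usable RAM: {usable / 2**20:.1f} MiB")
```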
Sep 12 17:30:36.016934 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 12 17:30:36.016941 kernel: Hypervisor detected: KVM
Sep 12 17:30:36.016951 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:30:36.016958 kernel: kvm-clock: using sched offset of 4690149810 cycles
Sep 12 17:30:36.016966 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:30:36.016973 kernel: tsc: Detected 2794.748 MHz processor
Sep 12 17:30:36.016980 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:30:36.016988 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:30:36.016995 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 12 17:30:36.017002 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 12 17:30:36.017009 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:30:36.017019 kernel: Using GB pages for direct mapping
Sep 12 17:30:36.017026 kernel: Secure boot disabled
Sep 12 17:30:36.017033 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:30:36.017040 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 12 17:30:36.017050 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:30:36.017058 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:30:36.017065 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:30:36.017083 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 12 17:30:36.017091 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:30:36.017101 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:30:36.017108 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:30:36.017116 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:30:36.017123 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 12 17:30:36.017132 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 12 17:30:36.017142 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 12 17:30:36.017149 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 12 17:30:36.017158 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 12 17:30:36.017167 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 12 17:30:36.017176 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 12 17:30:36.017185 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 12 17:30:36.017194 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 12 17:30:36.017203 kernel: No NUMA configuration found
Sep 12 17:30:36.017242 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 12 17:30:36.017255 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 12 17:30:36.017264 kernel: Zone ranges:
Sep 12 17:30:36.017271 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:30:36.017278 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 12 17:30:36.017285 kernel: Normal empty
Sep 12 17:30:36.017293 kernel: Movable zone start for each node
Sep 12 17:30:36.017300 kernel: Early memory node ranges
Sep 12 17:30:36.017307 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 12 17:30:36.017314 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 12 17:30:36.017321 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 12 17:30:36.017331 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 12 17:30:36.017338 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 12 17:30:36.017345 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 12 17:30:36.017355 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 12 17:30:36.017362 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:30:36.017369 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 12 17:30:36.017376 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 12 17:30:36.017383 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:30:36.017391 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 12 17:30:36.017400 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 12 17:30:36.017408 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 12 17:30:36.017415 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:30:36.017422 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:30:36.017429 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:30:36.017436 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:30:36.017444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:30:36.017451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:30:36.017458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:30:36.017468 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:30:36.017475 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:30:36.017482 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:30:36.017489 kernel: TSC deadline timer available
Sep 12 17:30:36.017506 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 12 17:30:36.017530 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:30:36.017554 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 12 17:30:36.017562 kernel: kvm-guest: setup PV sched yield
Sep 12 17:30:36.017569 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 12 17:30:36.017580 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:30:36.017587 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:30:36.017595 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 12 17:30:36.017602 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 12 17:30:36.017609 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 12 17:30:36.017617 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 12 17:30:36.017624 kernel: kvm-guest: PV spinlocks enabled
Sep 12 17:30:36.017631 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 17:30:36.017644 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:30:36.017658 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:30:36.017666 kernel: random: crng init done
Sep 12 17:30:36.017673 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:30:36.017681 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:30:36.017688 kernel: Fallback order for Node 0: 0
Sep 12 17:30:36.017695 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 12 17:30:36.017702 kernel: Policy zone: DMA32
Sep 12 17:30:36.017709 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:30:36.017720 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 171128K reserved, 0K cma-reserved)
Sep 12 17:30:36.017727 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:30:36.017734 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:30:36.017742 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:30:36.017749 kernel: Dynamic Preempt: voluntary
Sep 12 17:30:36.017764 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:30:36.017779 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:30:36.017787 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:30:36.017795 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:30:36.017803 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:30:36.017810 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:30:36.017818 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:30:36.017828 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:30:36.017835 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 12 17:30:36.017846 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:30:36.017853 kernel: Console: colour dummy device 80x25
Sep 12 17:30:36.017861 kernel: printk: console [ttyS0] enabled
Sep 12 17:30:36.017871 kernel: ACPI: Core revision 20230628
Sep 12 17:30:36.017879 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:30:36.017886 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:30:36.017894 kernel: x2apic enabled
Sep 12 17:30:36.017901 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:30:36.017909 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 12 17:30:36.017917 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 12 17:30:36.017924 kernel: kvm-guest: setup PV IPIs
Sep 12 17:30:36.017932 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:30:36.017942 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 12 17:30:36.017950 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 12 17:30:36.017957 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 17:30:36.017965 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 12 17:30:36.017972 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 12 17:30:36.017980 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:30:36.017987 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:30:36.017995 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:30:36.018003 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 12 17:30:36.018013 kernel: active return thunk: retbleed_return_thunk
Sep 12 17:30:36.018020 kernel: RETBleed: Mitigation: untrained return thunk
Sep 12 17:30:36.018028 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:30:36.018036 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:30:36.018045 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 12 17:30:36.018054 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 12 17:30:36.018061 kernel: active return thunk: srso_return_thunk
Sep 12 17:30:36.018077 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 12 17:30:36.018088 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:30:36.018095 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:30:36.018103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:30:36.018111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:30:36.018119 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 12 17:30:36.018127 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:30:36.018134 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:30:36.018142 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:30:36.018149 kernel: landlock: Up and running.
Sep 12 17:30:36.018159 kernel: SELinux: Initializing.
Sep 12 17:30:36.018167 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:30:36.018174 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:30:36.018182 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 12 17:30:36.018190 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:30:36.018198 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:30:36.018206 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:30:36.018226 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 12 17:30:36.018234 kernel: ... version: 0
Sep 12 17:30:36.018244 kernel: ... bit width: 48
Sep 12 17:30:36.018252 kernel: ... generic registers: 6
Sep 12 17:30:36.018259 kernel: ... value mask: 0000ffffffffffff
Sep 12 17:30:36.018267 kernel: ... max period: 00007fffffffffff
Sep 12 17:30:36.018275 kernel: ... fixed-purpose events: 0
Sep 12 17:30:36.018282 kernel: ... event mask: 000000000000003f
Sep 12 17:30:36.018289 kernel: signal: max sigframe size: 1776
Sep 12 17:30:36.018298 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:30:36.018307 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:30:36.018317 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:30:36.018326 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:30:36.018335 kernel: .... node #0, CPUs: #1 #2 #3
Sep 12 17:30:36.018343 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:30:36.018353 kernel: smpboot: Max logical packages: 1
Sep 12 17:30:36.018360 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 12 17:30:36.018368 kernel: devtmpfs: initialized
Sep 12 17:30:36.018375 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:30:36.018383 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 12 17:30:36.018393 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 12 17:30:36.018401 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 12 17:30:36.018409 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 12 17:30:36.018416 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 12 17:30:36.018424 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:30:36.018432 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:30:36.018439 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:30:36.018458 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:30:36.018489 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:30:36.018510 kernel: audit: type=2000 audit(1757698234.289:1): state=initialized audit_enabled=0 res=1
Sep 12 17:30:36.018518 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:30:36.018526 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:30:36.018533 kernel: cpuidle: using governor menu
Sep 12 17:30:36.018541 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:30:36.018548 kernel: dca service started, version 1.12.1
Sep 12 17:30:36.018556 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 17:30:36.018564 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 17:30:36.018576 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:30:36.018588 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
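The preset BogoMIPS values follow directly from the logged lpj. A sketch of the kernel's arithmetic, assuming the common HZ=1000 tick rate (the config value is not printed, so that is an assumption):

```python
import math

HZ = 1000     # assumed timer frequency; not shown in the log
lpj = 2794748 # from "Calibrating delay loop (skipped) ... (lpj=2794748)"
cpus = 4      # from "smp: Brought up 1 node, 4 CPUs"

def bogomips(loops_per_jiffy):
    # The kernel prints lpj / (500000 / HZ), truncated to two decimals.
    value = loops_per_jiffy * HZ / 500000
    return math.floor(value * 100) / 100

print(bogomips(lpj))         # 5589.49, matching the per-CPU calibration line
print(bogomips(cpus * lpj))  # 22357.98, matching the smpboot total
```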
Sep 12 17:30:36.018595 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:30:36.018603 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:30:36.018611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:30:36.018618 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:30:36.018626 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:30:36.018633 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:30:36.018641 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:30:36.018648 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:30:36.018659 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:30:36.018667 kernel: ACPI: Interpreter enabled
Sep 12 17:30:36.018674 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 12 17:30:36.018682 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:30:36.018689 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:30:36.018697 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:30:36.018705 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 17:30:36.018712 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:30:36.018997 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:30:36.019155 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 12 17:30:36.020401 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 12 17:30:36.020416 kernel: PCI host bridge to bus 0000:00
Sep 12 17:30:36.020575 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:30:36.020695 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:30:36.020811 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:30:36.020934 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 12 17:30:36.021050 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 17:30:36.021176 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 12 17:30:36.021320 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:30:36.021498 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 17:30:36.021710 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 12 17:30:36.021882 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 12 17:30:36.022057 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 12 17:30:36.022259 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 12 17:30:36.022427 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 12 17:30:36.022605 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:30:36.022797 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 12 17:30:36.022955 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 12 17:30:36.023105 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 12 17:30:36.023307 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 12 17:30:36.023499 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:30:36.023650 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 12 17:30:36.023788 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 12 17:30:36.023918 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 12 17:30:36.024063 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:30:36.024263 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 12 17:30:36.024400 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 12 17:30:36.024571 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 12 17:30:36.024737 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 12 17:30:36.024907 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 17:30:36.025040 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 17:30:36.025204 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 17:30:36.025422 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 12 17:30:36.025548 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 12 17:30:36.025690 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 17:30:36.025815 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 12 17:30:36.025825 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:30:36.025833 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:30:36.025841 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:30:36.025854 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:30:36.025862 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 17:30:36.025870 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 17:30:36.025878 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 17:30:36.025885 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 17:30:36.025893 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 17:30:36.025901 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 17:30:36.025909 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 17:30:36.025916 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 17:30:36.025927 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 17:30:36.025934 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 17:30:36.025942 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 17:30:36.025950 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 17:30:36.025958 kernel: iommu: Default domain type: Translated
Sep 12 17:30:36.025966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:30:36.025973 kernel: efivars: Registered efivars operations
Sep 12 17:30:36.025981 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:30:36.025988 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:30:36.025999 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 12 17:30:36.026006 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 12 17:30:36.026014 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 12 17:30:36.026021 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 12 17:30:36.026167 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 17:30:36.026338 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 17:30:36.026467 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:30:36.026477 kernel: vgaarb: loaded
Sep 12 17:30:36.026485 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:30:36.026499 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:30:36.026507 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:30:36.026514 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:30:36.026522 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:30:36.026530 kernel: pnp: PnP ACPI init
Sep 12 17:30:36.026680 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 17:30:36.026692 kernel: pnp: PnP ACPI: found 6 devices
Sep 12 17:30:36.026700 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:30:36.026711 kernel: NET: Registered PF_INET protocol family
Sep 12 17:30:36.026719 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:30:36.026727 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:30:36.026735 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:30:36.026742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:30:36.026750 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:30:36.026758 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:30:36.026766 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:30:36.026773 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:30:36.026784 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:30:36.026792 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:30:36.026918 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 12 17:30:36.027057 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 12 17:30:36.027189 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:30:36.027393 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:30:36.027516 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:30:36.027630 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 12 17:30:36.027751 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 17:30:36.027866 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 12 17:30:36.027876 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:30:36.027884 kernel: Initialise system trusted keyrings
Sep 12 17:30:36.027892 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:30:36.027899 kernel: Key type asymmetric registered
Sep 12 17:30:36.027907 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:30:36.027915 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:30:36.027926 kernel: io scheduler mq-deadline registered
Sep 12 17:30:36.027935 kernel: io scheduler kyber registered
Sep 12 17:30:36.027942 kernel: io scheduler bfq registered
Sep 12 17:30:36.027950 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:30:36.027958 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 17:30:36.027966 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 12 17:30:36.027974 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 12 17:30:36.027981 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:30:36.027989 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:30:36.028008 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:30:36.028015 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:30:36.028023 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:30:36.028179 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:30:36.028321 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 12 17:30:36.028440 kernel: rtc_cmos 00:04: registered as rtc0
Sep 12 17:30:36.028557 kernel: rtc_cmos 00:04: setting system clock to 2025-09-12T17:30:35 UTC (1757698235)
Sep 12 17:30:36.028572 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 12 17:30:36.028580 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 12 17:30:36.028588 kernel: efifb: probing for efifb
Sep 12 17:30:36.028596 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 12 17:30:36.028603 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 12 17:30:36.028611 kernel: efifb: scrolling: redraw
Sep 12 17:30:36.028619 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 12 17:30:36.028644 kernel: Console: switching to colour frame buffer device 100x37
Sep 12 17:30:36.028655 kernel: fb0: EFI VGA frame buffer device
Sep 12 17:30:36.028666 kernel: pstore: Using crash dump compression: deflate
Sep 12 17:30:36.028674 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 12 17:30:36.028681 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:30:36.028689 kernel: Segment Routing with IPv6
Sep 12 17:30:36.028697 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:30:36.028705 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:30:36.028713 kernel: Key type dns_resolver registered
Sep 12 17:30:36.028721 kernel: IPI shorthand broadcast: enabled
Sep 12 17:30:36.028729 kernel: sched_clock: Marking stable (1209003227, 147079002)->(1514139753, -158057524)
Sep 12 17:30:36.028737 kernel: registered taskstats version 1
Sep 12 17:30:36.028747 kernel: Loading compiled-in X.509 certificates
Sep 12 17:30:36.028755 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:30:36.028763 kernel: Key type .fscrypt registered
Sep 12 17:30:36.028771 kernel: Key type fscrypt-provisioning registered
Sep 12 17:30:36.028779 kernel: ima: No TPM chip found, activating TPM-bypass!
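rtc_cmos prints both the wall-clock time and the raw epoch seconds; the two agree, which is easy to confirm:

```python
from datetime import datetime, timezone

# "rtc_cmos 00:04: setting system clock to 2025-09-12T17:30:35 UTC (1757698235)"
print(datetime.fromtimestamp(1757698235, tz=timezone.utc).isoformat())
# -> 2025-09-12T17:30:35+00:00
```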
Sep 12 17:30:36.028779 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:30:36.028787 kernel: ima: No architecture policies found
Sep 12 17:30:36.028794 kernel: clk: Disabling unused clocks
Sep 12 17:30:36.028802 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:30:36.028813 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:30:36.028821 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:30:36.028829 kernel: Run /init as init process
Sep 12 17:30:36.028837 kernel: with arguments:
Sep 12 17:30:36.028844 kernel: /init
Sep 12 17:30:36.030029 kernel: with environment:
Sep 12 17:30:36.030041 kernel: HOME=/
Sep 12 17:30:36.030049 kernel: TERM=linux
Sep 12 17:30:36.030057 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:30:36.030082 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:30:36.030092 systemd[1]: Detected virtualization kvm.
Sep 12 17:30:36.030101 systemd[1]: Detected architecture x86-64.
Sep 12 17:30:36.030109 systemd[1]: Running in initrd.
Sep 12 17:30:36.030122 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:30:36.030131 systemd[1]: Hostname set to .
Sep 12 17:30:36.030141 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:30:36.030152 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:30:36.030164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:30:36.030176 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:30:36.030189 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:30:36.030200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:30:36.030224 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:30:36.030233 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:30:36.030244 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:30:36.030252 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:30:36.030261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:30:36.030269 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:30:36.030278 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:30:36.030289 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:30:36.030298 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:30:36.030306 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:30:36.030314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:30:36.030323 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:30:36.030331 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:30:36.030340 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:30:36.030348 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:30:36.030359 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:30:36.030368 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:30:36.030376 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:30:36.030384 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:30:36.030393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:30:36.030401 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:30:36.030409 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:30:36.030417 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:30:36.030426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:30:36.030437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:30:36.030446 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:30:36.030454 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:30:36.030462 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:30:36.030471 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:30:36.030506 systemd-journald[192]: Collecting audit messages is disabled.
Sep 12 17:30:36.030526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:30:36.030534 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:30:36.030546 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:30:36.030554 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:30:36.030563 systemd-journald[192]: Journal started
Sep 12 17:30:36.030581 systemd-journald[192]: Runtime Journal (/run/log/journal/9de181d221a749359e6968e52b4c0b82) is 6.0M, max 48.3M, 42.2M free.
Sep 12 17:30:36.015726 systemd-modules-load[194]: Inserted module 'overlay'
Sep 12 17:30:36.034856 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:30:36.035420 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:30:36.048285 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:30:36.049970 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:30:36.053235 kernel: Bridge firewalling registered
Sep 12 17:30:36.053262 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 12 17:30:36.055067 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:30:36.057170 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:30:36.059940 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:30:36.082383 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:30:36.085111 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:30:36.101015 dracut-cmdline[223]: dracut-dracut-053
Sep 12 17:30:36.104240 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:30:36.106796 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:30:36.120428 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:30:36.153281 systemd-resolved[248]: Positive Trust Anchors:
Sep 12 17:30:36.153305 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:30:36.153340 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:30:36.156181 systemd-resolved[248]: Defaulting to hostname 'linux'.
Sep 12 17:30:36.157643 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:30:36.162981 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:30:36.219271 kernel: SCSI subsystem initialized
Sep 12 17:30:36.231262 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:30:36.244260 kernel: iscsi: registered transport (tcp)
Sep 12 17:30:36.273280 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:30:36.273385 kernel: QLogic iSCSI HBA Driver
Sep 12 17:30:36.344609 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:30:36.349413 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:30:36.378393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:30:36.378452 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:30:36.379412 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:30:36.424259 kernel: raid6: avx2x4 gen() 21332 MB/s
Sep 12 17:30:36.441254 kernel: raid6: avx2x2 gen() 21030 MB/s
Sep 12 17:30:36.470256 kernel: raid6: avx2x1 gen() 17368 MB/s
Sep 12 17:30:36.470340 kernel: raid6: using algorithm avx2x4 gen() 21332 MB/s
Sep 12 17:30:36.487449 kernel: raid6: .... xor() 6110 MB/s, rmw enabled
Sep 12 17:30:36.487518 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:30:36.512265 kernel: xor: automatically using best checksumming function avx
Sep 12 17:30:36.678251 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:30:36.695038 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:30:36.707750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:30:36.722617 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 12 17:30:36.728289 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:30:36.736413 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:30:36.761724 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 12 17:30:36.805410 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:30:36.813563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:30:36.893595 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:30:36.909643 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:30:36.924094 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:30:36.927592 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:30:36.930512 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:30:36.932985 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:30:36.941440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:30:36.959306 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 12 17:30:36.959585 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:30:36.956077 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:30:36.964664 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:30:36.970241 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:30:36.970296 kernel: GPT:9289727 != 19775487
Sep 12 17:30:36.970310 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:30:36.970320 kernel: GPT:9289727 != 19775487
Sep 12 17:30:36.970330 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:30:36.970339 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:30:36.970728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:30:36.970909 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:30:36.975988 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:30:36.978799 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:30:36.979060 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:30:36.981185 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:30:36.985238 kernel: libata version 3.00 loaded.
Sep 12 17:30:36.988538 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:30:36.995585 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 17:30:36.995813 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 17:30:36.998994 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 17:30:36.999372 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 17:30:37.007195 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:30:37.007389 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:30:37.014437 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:30:37.014455 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:30:37.025159 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (473)
Sep 12 17:30:37.023868 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
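The GPT complaints above are the classic signature of an image written to a larger virtual disk: the primary header still points at a backup header placed for the original, smaller image, while the disk's real last LBA is further out. A sketch of the arithmetic behind the logged numbers:

```python
SECTOR = 512
blocks = 19775488      # "virtio_blk ... [vda] 19775488 512-byte logical blocks"
last_lba = blocks - 1  # 19775487, where the backup GPT header should live
alt_lba = 9289727      # where the primary header says the backup is

print(f"disk: {blocks * SECTOR / 1e9:.1f} GB / {blocks * SECTOR / 2**30:.2f} GiB")
# -> 10.1 GB / 9.43 GiB, matching the virtio_blk line
print(f"backup header placed for a {(alt_lba + 1) * SECTOR / 2**30:.2f} GiB image")
print("GPT mismatch:", alt_lba, "!=", last_lba)   # the kernel's complaint
```

The disk-uuid lines that follow ("Primary Header is updated", "Secondary Header is updated") show the headers being rewritten, after which the partition table is re-read without further complaint.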
Sep 12 17:30:37.032936 kernel: scsi host0: ahci
Sep 12 17:30:37.031626 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:30:37.036242 kernel: scsi host1: ahci
Sep 12 17:30:37.040352 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (464)
Sep 12 17:30:37.040376 kernel: scsi host2: ahci
Sep 12 17:30:37.042358 kernel: scsi host3: ahci
Sep 12 17:30:37.043544 kernel: scsi host4: ahci
Sep 12 17:30:37.045916 kernel: scsi host5: ahci
Sep 12 17:30:37.046127 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 12 17:30:37.046140 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 12 17:30:37.047392 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 12 17:30:37.047421 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 12 17:30:37.047360 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:30:37.053062 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 12 17:30:37.053079 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 12 17:30:37.053292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:30:37.070901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:30:37.077284 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:30:37.078521 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:30:37.093377 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:30:37.095409 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:30:37.101119 disk-uuid[560]: Primary Header is updated.
Sep 12 17:30:37.101119 disk-uuid[560]: Secondary Entries is updated.
Sep 12 17:30:37.101119 disk-uuid[560]: Secondary Header is updated.
Sep 12 17:30:37.105247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:30:37.111251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:30:37.120531 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:30:37.362243 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 17:30:37.362329 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 17:30:37.362341 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 17:30:37.363253 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 17:30:37.363349 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 12 17:30:37.364238 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 17:30:37.365251 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 12 17:30:37.366427 kernel: ata3.00: applying bridge limits
Sep 12 17:30:37.366459 kernel: ata3.00: configured for UDMA/100
Sep 12 17:30:37.367245 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 12 17:30:37.415754 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 12 17:30:37.416049 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 12 17:30:37.431494 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 12 17:30:38.112266 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:30:38.112332 disk-uuid[561]: The operation has completed successfully.
Sep 12 17:30:38.145452 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:30:38.145607 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:30:38.173483 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:30:38.179853 sh[597]: Success
Sep 12 17:30:38.194254 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 12 17:30:38.231374 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:30:38.244980 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:30:38.249458 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:30:38.262187 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:30:38.262253 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:30:38.262266 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:30:38.262278 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:30:38.262901 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:30:38.268681 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:30:38.271068 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:30:38.286578 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:30:38.287971 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:30:38.302542 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:30:38.302592 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:30:38.302604 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:30:38.306245 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:30:38.316995 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:30:38.318584 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:30:38.329876 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:30:38.336470 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:30:38.408397 ignition[697]: Ignition 2.19.0
Sep 12 17:30:38.408411 ignition[697]: Stage: fetch-offline
Sep 12 17:30:38.408457 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:30:38.408470 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:30:38.408606 ignition[697]: parsed url from cmdline: ""
Sep 12 17:30:38.408611 ignition[697]: no config URL provided
Sep 12 17:30:38.408618 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:30:38.408631 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:30:38.408666 ignition[697]: op(1): [started] loading QEMU firmware config module
Sep 12 17:30:38.408672 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:30:38.423036 ignition[697]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:30:38.432283 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:30:38.449435 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:30:38.465559 ignition[697]: parsing config with SHA512: a52e3e7d7224daedcd8312d75bf27e0c30857e9d28fb4ce316d2618a5163cd38d5c9dd444a70ba4e8566cdb3d41497334977cd061121fe628e8a97104a11f98a
Sep 12 17:30:38.470454 unknown[697]: fetched base config from "system"
Sep 12 17:30:38.470475 unknown[697]: fetched user config from "qemu"
Sep 12 17:30:38.474077 systemd-networkd[786]: lo: Link UP
Sep 12 17:30:38.474089 systemd-networkd[786]: lo: Gained carrier
Sep 12 17:30:38.476404 systemd-networkd[786]: Enumeration completed
Sep 12 17:30:38.476411 ignition[697]: fetch-offline: fetch-offline passed
Sep 12 17:30:38.476528 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:30:38.477165 ignition[697]: Ignition finished successfully
Sep 12 17:30:38.477092 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:30:38.477097 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:30:38.478268 systemd-networkd[786]: eth0: Link UP
Sep 12 17:30:38.478273 systemd-networkd[786]: eth0: Gained carrier
Sep 12 17:30:38.478283 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:30:38.478331 systemd[1]: Reached target network.target - Network.
Sep 12 17:30:38.492428 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:30:38.493997 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:30:38.500485 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
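Ignition identifies the config it parses only by a SHA512 digest. A sketch of how such a digest is computed, with a stand-in config blob (the real config arrived via QEMU's fw_cfg device and is not shown in the log, only its digest is):

```python
import hashlib

# Stand-in config bytes; not the actual QEMU-provided Ignition config.
config = b'{"ignition": {"version": "3.0.0"}}'

digest = hashlib.sha512(config).hexdigest()
print(f"parsing config with SHA512: {digest}")
```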
Sep 12 17:30:38.504292 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:30:38.518459 ignition[789]: Ignition 2.19.0
Sep 12 17:30:38.518472 ignition[789]: Stage: kargs
Sep 12 17:30:38.518645 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:30:38.518658 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:30:38.519520 ignition[789]: kargs: kargs passed
Sep 12 17:30:38.519570 ignition[789]: Ignition finished successfully
Sep 12 17:30:38.523555 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:30:38.537472 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:30:38.551975 ignition[798]: Ignition 2.19.0
Sep 12 17:30:38.551993 ignition[798]: Stage: disks
Sep 12 17:30:38.552283 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:30:38.552298 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:30:38.553149 ignition[798]: disks: disks passed
Sep 12 17:30:38.555782 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:30:38.553201 ignition[798]: Ignition finished successfully
Sep 12 17:30:38.557916 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:30:38.559937 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:30:38.562021 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:30:38.564073 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:30:38.566331 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:30:38.578479 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:30:38.592882 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:30:38.599759 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:30:38.611362 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:30:38.709376 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:30:38.710157 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:30:38.711145 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:30:38.719313 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:30:38.721302 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:30:38.723760 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:30:38.723823 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:30:38.733170 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Sep 12 17:30:38.733193 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:30:38.733204 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:30:38.733235 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:30:38.723855 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
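The DHCPv4 lease on eth0 puts both the address and the gateway inside one /16, which the ipaddress module can confirm:

```python
import ipaddress

# "eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1"
iface = ipaddress.ip_interface("10.0.0.50/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)              # 10.0.0.0/16
print(gateway in iface.network)   # True: the gateway is on-link
```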
Sep 12 17:30:38.735293 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:30:38.737095 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:30:38.745894 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:30:38.747973 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:30:38.788633 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:30:38.792485 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:30:38.797047 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:30:38.801571 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:30:38.891832 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:30:38.902401 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:30:38.904455 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:30:38.912252 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:30:38.931019 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:30:39.025234 ignition[932]: INFO : Ignition 2.19.0 Sep 12 17:30:39.025234 ignition[932]: INFO : Stage: mount Sep 12 17:30:39.027091 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:30:39.027091 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:30:39.027091 ignition[932]: INFO : mount: mount passed Sep 12 17:30:39.027091 ignition[932]: INFO : Ignition finished successfully Sep 12 17:30:39.028508 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:30:39.039396 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:30:39.260547 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:30:39.273423 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:30:39.284240 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Sep 12 17:30:39.286292 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:30:39.286306 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:30:39.286317 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:30:39.290238 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:30:39.291657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
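
The initrd-setup-root step above runs cut(1) against /sysroot/etc/passwd, group, shadow and gshadow, which do not exist yet on a first boot, hence the "No such file or directory" lines. A rough Python equivalent of that field extraction, assuming the script wants the first colon-separated field and tolerates a missing file:

    def first_fields(path: str, delim: str = ":") -> list[str]:
        # Equivalent of `cut -d: -f1 PATH`; a missing file yields nothing,
        # matching how the setup step shrugs off the errors logged above.
        try:
            with open(path) as f:
                return [line.split(delim, 1)[0] for line in f]
        except FileNotFoundError:
            return []

    print(first_fields("/sysroot/etc/passwd"))
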
Sep 12 17:30:39.328795 ignition[958]: INFO : Ignition 2.19.0 Sep 12 17:30:39.328795 ignition[958]: INFO : Stage: files Sep 12 17:30:39.330642 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:30:39.330642 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:30:39.330642 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:30:39.334251 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:30:39.334251 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:30:39.334251 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:30:39.334251 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:30:39.339579 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:30:39.339579 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:30:39.339579 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 12 17:30:39.334440 unknown[958]: wrote ssh authorized keys file for user: core Sep 12 17:30:39.389447 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:30:40.298411 systemd-networkd[786]: eth0: Gained IPv6LL Sep 12 17:30:40.766812 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 12 17:30:40.766812 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:30:40.773639 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 17:30:40.775538 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:30:40.778571 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:30:40.778571 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:30:40.778571 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:30:40.778571 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:30:40.786303 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:30:40.788491 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:30:40.790717 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:30:40.792577 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:30:40.795353 ignition[958]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:30:40.797855 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:30:40.800017 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 12 17:30:41.317774 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:30:42.078369 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 12 17:30:42.078369 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:30:42.082875 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:30:42.085412 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:30:42.085412 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:30:42.085412 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 12 17:30:42.089723 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:30:42.091623 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 12 17:30:42.091623 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 12 17:30:42.094900 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 12 17:30:42.126273 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:30:42.136473 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 12 17:30:42.138435 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 12 17:30:42.138435 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:30:42.138435 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:30:42.138435 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:30:42.138435 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:30:42.138435 ignition[958]: INFO : files: files passed Sep 12 17:30:42.138435 ignition[958]: INFO : Ignition finished successfully Sep 12 17:30:42.141044 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:30:42.149423 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:30:42.150695 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
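
The "setting preset to enabled/disabled" operations above boil down to creating or removing enablement symlinks for the unit. A simplified sketch; the real `systemctl preset` reads the unit's [Install] section to pick the target directory, so the multi-user.target.wants path here is an assumption:

    import os

    WANTS_DIR = "/etc/systemd/system/multi-user.target.wants"  # assumed [Install] target

    def apply_preset(unit: str, enabled: bool) -> None:
        # Create or remove the enablement symlink, as in the
        # "removing enablement symlink(s)" step logged above.
        link = os.path.join(WANTS_DIR, unit)
        if enabled:
            os.makedirs(WANTS_DIR, exist_ok=True)
            if not os.path.islink(link):
                os.symlink(f"/etc/systemd/system/{unit}", link)
        elif os.path.islink(link):
            os.remove(link)

    apply_preset("prepare-helm.service", enabled=True)      # op(11) above
    apply_preset("coreos-metadata.service", enabled=False)  # op(f) above
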
Sep 12 17:30:42.156782 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:30:42.157032 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:30:42.164131 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Sep 12 17:30:42.167407 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:30:42.169260 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:30:42.172401 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:30:42.170311 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:30:42.172974 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:30:42.185425 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:30:42.233298 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 12 17:30:42.233461 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:30:42.236184 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:30:42.238601 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:30:42.241493 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:30:42.253466 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:30:42.270536 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:30:42.283419 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:30:42.297983 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:30:42.299451 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:30:42.301809 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:30:42.304041 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:30:42.304206 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:30:42.306517 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:30:42.308397 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:30:42.310600 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:30:42.312716 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:30:42.314893 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:30:42.317242 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:30:42.319515 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:30:42.322025 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:30:42.324285 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:30:42.326712 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:30:42.328613 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:30:42.328763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:30:42.331044 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 12 17:30:42.332750 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:30:42.334941 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:30:42.335141 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:30:42.337298 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:30:42.337433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:30:42.339795 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:30:42.340145 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:30:42.342233 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:30:42.344110 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:30:42.348312 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:30:42.350414 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:30:42.352580 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:30:42.354549 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:30:42.354668 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:30:42.356976 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:30:42.357206 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:30:42.359721 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:30:42.359876 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:30:42.361993 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:30:42.362152 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:30:42.372390 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:30:42.373859 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:30:42.374028 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:30:42.377488 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:30:42.378517 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:30:42.378952 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:30:42.381418 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:30:42.381654 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:30:42.399642 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:30:42.399785 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:30:42.407375 ignition[1014]: INFO : Ignition 2.19.0 Sep 12 17:30:42.407375 ignition[1014]: INFO : Stage: umount Sep 12 17:30:42.409502 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:30:42.409502 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:30:42.409502 ignition[1014]: INFO : umount: umount passed Sep 12 17:30:42.409502 ignition[1014]: INFO : Ignition finished successfully Sep 12 17:30:42.410414 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:30:42.410586 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:30:42.413726 systemd[1]: Stopped target network.target - Network. 
Sep 12 17:30:42.414850 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:30:42.414941 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:30:42.416991 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:30:42.417059 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:30:42.419131 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:30:42.419195 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:30:42.421308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:30:42.421378 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:30:42.423440 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:30:42.425710 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:30:42.429094 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:30:42.431450 systemd-networkd[786]: eth0: DHCPv6 lease lost Sep 12 17:30:42.434586 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:30:42.434787 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:30:42.437712 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:30:42.437810 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:30:42.446408 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:30:42.448021 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:30:42.448102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:30:42.450613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:30:42.453384 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:30:42.453557 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:30:42.461461 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:30:42.461578 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:30:42.463804 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:30:42.463872 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:30:42.464551 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:30:42.464612 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:30:42.469744 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:30:42.473451 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:30:42.476861 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:30:42.477035 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:30:42.479212 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:30:42.479323 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:30:42.480694 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:30:42.480749 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:30:42.481007 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:30:42.481075 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Sep 12 17:30:42.481983 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:30:42.482049 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:30:42.482855 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:30:42.482933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:30:42.515988 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 12 17:30:42.518132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:30:42.518274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:30:42.518675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:30:42.518726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:30:42.536110 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:30:42.536271 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:30:42.780737 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:30:42.780953 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:30:42.784510 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:30:42.786657 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:30:42.786747 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:30:42.804414 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:30:42.813551 systemd[1]: Switching root. Sep 12 17:30:42.850504 systemd-journald[192]: Journal stopped Sep 12 17:30:44.739932 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 12 17:30:44.740014 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:30:44.740031 kernel: SELinux: policy capability open_perms=1 Sep 12 17:30:44.740045 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:30:44.740066 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:30:44.740089 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:30:44.740104 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:30:44.740117 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:30:44.740131 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:30:44.740151 kernel: audit: type=1403 audit(1757698243.689:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:30:44.740166 systemd[1]: Successfully loaded SELinux policy in 48.800ms. Sep 12 17:30:44.740190 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.881ms. Sep 12 17:30:44.740205 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 12 17:30:44.740233 systemd[1]: Detected virtualization kvm. Sep 12 17:30:44.740251 systemd[1]: Detected architecture x86-64. Sep 12 17:30:44.740266 systemd[1]: Detected first boot. Sep 12 17:30:44.740280 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:30:44.740295 zram_generator::config[1059]: No configuration found. Sep 12 17:30:44.740310 systemd[1]: Populated /etc with preset unit settings. 
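
"Initializing machine ID from VM UUID" above means systemd derived /etc/machine-id from the firmware-provided UUID rather than generating a random one. A sketch of that derivation, assuming the UUID is read from the usual DMI path:

    import re

    def machine_id_from_vm_uuid(path: str = "/sys/class/dmi/id/product_uuid") -> str:
        # Lower-case the firmware UUID and drop the dashes to get the
        # 32-hex-digit /etc/machine-id format.
        uuid = open(path).read().strip()
        mid = uuid.replace("-", "").lower()
        if not re.fullmatch(r"[0-9a-f]{32}", mid):
            raise ValueError(f"unexpected UUID: {uuid!r}")
        return mid
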
Sep 12 17:30:44.740325 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:30:44.740339 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 12 17:30:44.740354 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:30:44.740372 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:30:44.740387 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:30:44.740402 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:30:44.740423 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:30:44.740444 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:30:44.740459 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:30:44.740474 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:30:44.740488 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:30:44.740515 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:30:44.740533 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:30:44.740547 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:30:44.740562 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:30:44.740577 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:30:44.740592 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:30:44.740606 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:30:44.740621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:30:44.740635 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:30:44.740650 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:30:44.740667 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:30:44.740682 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:30:44.740697 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:30:44.740711 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:30:44.740726 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:30:44.740740 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:30:44.740755 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:30:44.740769 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:30:44.740787 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:30:44.740801 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:30:44.740816 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:30:44.740845 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:30:44.740860 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
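
Names above such as system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device use systemd's unit-name escaping: '/' maps to '-', and characters outside the safe set (including '-' itself) become C-style \xXX hex escapes. A simplified re-implementation that reproduces the names in the log:

    def systemd_escape(name: str) -> str:
        # Simplified systemd-escape: '/' -> '-', keep alphanumerics and a
        # few safe characters, hex-escape everything else (and a leading '.').
        out = []
        for i, ch in enumerate(name):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.append("\\x%02x" % ord(ch))
        return "".join(out)

    print(systemd_escape("serial-getty"))           # serial\x2dgetty
    print(systemd_escape("dev/disk/by-label/OEM"))  # dev-disk-by\x2dlabel-OEM
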
Sep 12 17:30:44.740875 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:30:44.740889 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:30:44.740905 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:44.740922 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:30:44.740940 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:30:44.740954 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:30:44.740969 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:30:44.740984 systemd[1]: Reached target machines.target - Containers. Sep 12 17:30:44.740999 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:30:44.741014 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:30:44.741028 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:30:44.741043 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:30:44.741060 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:30:44.741075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:30:44.741090 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:30:44.741105 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:30:44.741120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:30:44.741135 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:30:44.741149 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:30:44.741168 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:30:44.741186 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:30:44.741200 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:30:44.741227 kernel: fuse: init (API version 7.39) Sep 12 17:30:44.741242 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:30:44.741256 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:30:44.741270 kernel: loop: module loaded Sep 12 17:30:44.741284 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:30:44.741299 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:30:44.741332 systemd-journald[1133]: Collecting audit messages is disabled. Sep 12 17:30:44.741361 systemd-journald[1133]: Journal started Sep 12 17:30:44.741388 systemd-journald[1133]: Runtime Journal (/run/log/journal/9de181d221a749359e6968e52b4c0b82) is 6.0M, max 48.3M, 42.2M free. Sep 12 17:30:44.741429 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:30:44.438165 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:30:44.462139 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Sep 12 17:30:44.462645 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:30:44.745497 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:30:44.745526 systemd[1]: Stopped verity-setup.service. Sep 12 17:30:44.749257 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:44.752380 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:30:44.754494 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:30:44.756256 kernel: ACPI: bus type drm_connector registered Sep 12 17:30:44.756739 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:30:44.758078 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:30:44.759185 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:30:44.760405 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:30:44.761622 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:30:44.762875 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:30:44.764429 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:30:44.766110 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:30:44.766342 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:30:44.789076 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:30:44.789269 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:30:44.790760 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:30:44.790955 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:30:44.792400 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:30:44.792576 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:30:44.794304 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:30:44.794744 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:30:44.796465 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:30:44.796674 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:30:44.798166 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:30:44.799584 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:30:44.801471 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:30:44.816925 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:30:44.829310 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:30:44.831727 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:30:44.832879 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:30:44.832910 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:30:44.835013 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:30:44.838400 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 17:30:44.841603 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:30:44.842772 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:30:44.845993 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:30:44.848793 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:30:44.850204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:30:44.852445 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:30:44.855987 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:30:44.857599 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:30:44.865341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:30:44.881356 systemd-journald[1133]: Time spent on flushing to /var/log/journal/9de181d221a749359e6968e52b4c0b82 is 24.590ms for 992 entries. Sep 12 17:30:44.881356 systemd-journald[1133]: System Journal (/var/log/journal/9de181d221a749359e6968e52b4c0b82) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:30:44.925926 systemd-journald[1133]: Received client request to flush runtime journal. Sep 12 17:30:44.926001 kernel: loop0: detected capacity change from 0 to 229808 Sep 12 17:30:44.868003 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:30:44.871302 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:30:44.872938 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:30:44.874237 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:30:44.876048 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:30:44.883791 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:30:44.887024 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:30:44.897762 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:30:44.900802 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:30:44.904703 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:30:44.924400 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:30:44.937527 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:30:44.938455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:30:44.942797 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:30:44.947939 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:30:44.969459 kernel: loop1: detected capacity change from 0 to 142488 Sep 12 17:30:44.970080 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Sep 12 17:30:44.970099 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. 
Sep 12 17:30:44.977613 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:30:45.005824 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:30:45.007886 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:30:45.010253 kernel: loop2: detected capacity change from 0 to 140768 Sep 12 17:30:45.051324 kernel: loop3: detected capacity change from 0 to 229808 Sep 12 17:30:45.062247 kernel: loop4: detected capacity change from 0 to 142488 Sep 12 17:30:45.075240 kernel: loop5: detected capacity change from 0 to 140768 Sep 12 17:30:45.087004 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:30:45.087904 (sd-merge)[1199]: Merged extensions into '/usr'. Sep 12 17:30:45.093478 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:30:45.093499 systemd[1]: Reloading... Sep 12 17:30:45.160372 zram_generator::config[1225]: No configuration found. Sep 12 17:30:45.228792 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:30:45.309441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:30:45.372297 systemd[1]: Reloading finished in 278 ms. Sep 12 17:30:45.408565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:30:45.410233 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:30:45.423405 systemd[1]: Starting ensure-sysext.service... Sep 12 17:30:45.425545 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:30:45.435188 systemd[1]: Reloading requested from client PID 1262 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:30:45.435204 systemd[1]: Reloading... Sep 12 17:30:45.488786 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:30:45.489344 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:30:45.491286 zram_generator::config[1290]: No configuration found. Sep 12 17:30:45.492780 systemd-tmpfiles[1263]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:30:45.493199 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 12 17:30:45.493329 systemd-tmpfiles[1263]: ACLs are not supported, ignoring. Sep 12 17:30:45.497539 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:30:45.497556 systemd-tmpfiles[1263]: Skipping /boot Sep 12 17:30:45.511330 systemd-tmpfiles[1263]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:30:45.511348 systemd-tmpfiles[1263]: Skipping /boot Sep 12 17:30:45.615541 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:30:45.666846 systemd[1]: Reloading finished in 231 ms. Sep 12 17:30:45.686083 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
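
The (sd-merge) lines above show systemd-sysext stacking the containerd-flatcar, docker-flatcar and kubernetes extension images over /usr. Under the hood that is a read-only overlayfs mount; a sketch of how the lowerdir list is ordered, where the staging path is an assumption and the real tool also validates each image's extension-release metadata:

    extensions = ["containerd-flatcar", "docker-flatcar", "kubernetes"]  # from the log

    def overlay_lowerdir(base: str, exts: list[str]) -> str:
        # In overlayfs the FIRST lowerdir is the top-most layer, so the
        # extension /usr trees are listed before the base /usr.
        layers = [f"/run/extensions/{e}/usr" for e in exts] + [base]
        return ":".join(layers)

    print("mount -t overlay overlay -o lowerdir="
          + overlay_lowerdir("/usr", extensions) + " /usr")
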
Sep 12 17:30:45.687871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:30:45.708256 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:30:45.711097 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:30:45.713912 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:30:45.720539 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:30:45.728441 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:30:45.731372 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:30:45.745328 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:30:45.750904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:30:45.754045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:45.754462 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:30:45.763020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:30:45.765875 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:30:45.770136 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:30:45.774184 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:30:45.774541 systemd-udevd[1335]: Using default interface naming scheme 'v255'. Sep 12 17:30:45.777586 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:30:45.779306 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:45.782850 augenrules[1355]: No rules Sep 12 17:30:45.781307 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:30:45.784040 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:30:45.784840 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:30:45.789661 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:30:45.791718 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:30:45.791960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:30:45.793950 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:30:45.794205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:30:45.805329 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:30:45.808147 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:45.808500 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:30:45.816690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:30:45.825502 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 12 17:30:45.830452 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:30:45.831632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:30:45.831756 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:45.832527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:30:45.835566 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:30:45.837970 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:30:45.840789 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:30:45.840986 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:30:45.857271 systemd[1]: Finished ensure-sysext.service. Sep 12 17:30:45.858792 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:30:45.859052 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:30:45.860913 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:30:45.861090 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:30:45.866511 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:45.866720 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:30:45.875439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:30:45.878342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:30:45.879527 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:30:45.891466 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:30:45.892615 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:30:45.895743 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:30:45.897203 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:30:45.897244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:30:45.897886 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:30:45.898585 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:30:45.900175 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:30:45.900438 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:30:45.906542 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:30:45.906639 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:30:45.945794 systemd-resolved[1333]: Positive Trust Anchors: Sep 12 17:30:45.948091 systemd-resolved[1333]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:30:45.948179 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:30:45.957377 systemd-resolved[1333]: Defaulting to hostname 'linux'. Sep 12 17:30:45.959253 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1374) Sep 12 17:30:45.959725 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:30:45.961165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:30:45.974607 systemd-networkd[1402]: lo: Link UP Sep 12 17:30:45.974965 systemd-networkd[1402]: lo: Gained carrier Sep 12 17:30:45.976678 systemd-networkd[1402]: Enumeration completed Sep 12 17:30:45.977877 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:30:45.979360 systemd[1]: Reached target network.target - Network. Sep 12 17:30:45.985647 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:30:45.989484 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:30:45.990977 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:30:46.006151 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:30:46.010380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:30:46.021232 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:30:46.026459 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:30:46.026470 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:30:46.027324 systemd-networkd[1402]: eth0: Link UP Sep 12 17:30:46.027334 systemd-networkd[1402]: eth0: Gained carrier Sep 12 17:30:46.027347 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:30:46.028425 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:30:46.034005 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 12 17:30:46.034297 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 17:30:46.034491 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 17:30:46.042249 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 17:30:46.042444 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:30:46.040309 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:30:46.041139 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection. Sep 12 17:30:46.043399 systemd-timesyncd[1403]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
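
The positive trust anchor above is the DNSSEC root zone key (KSK-2017) in DS-record form: owner ".", key tag 20326, algorithm 8 (RSASHA256), digest type 2 (SHA-256), followed by the digest itself. A small parser for that presentation format:

    def parse_ds(record: str) -> dict:
        # ". IN DS <key-tag> <algorithm> <digest-type> <digest>"
        owner, _cls, _rtype, tag, alg, dtype, digest = record.split(maxsplit=6)
        return {
            "owner": owner,
            "key_tag": int(tag),         # 20326 = KSK-2017
            "algorithm": int(alg),       # 8 = RSASHA256
            "digest_type": int(dtype),   # 2 = SHA-256
            "digest": digest.lower(),
        }

    print(parse_ds(". IN DS 20326 8 2 "
                   "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"))
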
Sep 12 17:30:46.043459 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-09-12 17:30:46.062939 UTC. Sep 12 17:30:46.047496 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:30:46.069879 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:30:46.083953 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:30:46.084320 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:30:46.086235 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:30:46.096544 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:30:46.171841 kernel: kvm_amd: TSC scaling supported Sep 12 17:30:46.171949 kernel: kvm_amd: Nested Virtualization enabled Sep 12 17:30:46.171966 kernel: kvm_amd: Nested Paging enabled Sep 12 17:30:46.172671 kernel: kvm_amd: LBR virtualization supported Sep 12 17:30:46.173426 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 12 17:30:46.174698 kernel: kvm_amd: Virtual GIF supported Sep 12 17:30:46.195276 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:30:46.212267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:30:46.229816 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:30:46.242705 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:30:46.252612 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:30:46.287772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:30:46.289516 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:30:46.290770 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:30:46.292102 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:30:46.293523 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:30:46.295130 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:30:46.296458 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:30:46.297869 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:30:46.299259 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:30:46.299299 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:30:46.300285 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:30:46.302309 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:30:46.305300 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:30:46.322118 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:30:46.324688 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:30:46.326400 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:30:46.327710 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:30:46.328782 systemd[1]: Reached target basic.target - Basic System. 
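
systemd-timesyncd's exchange with 10.0.0.1:123 above is plain (S)NTP over UDP. A minimal client query for the same protocol, without timesyncd's sample filtering and gradual clock adjustment:

    import socket
    import struct
    import time

    NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = "10.0.0.1", port: int = 123) -> float:
        # Build a 48-byte request: LI=0, version 4, mode 3 (client).
        pkt = bytearray(48)
        pkt[0] = (0 << 6) | (4 << 3) | 3
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(2.0)
            s.sendto(pkt, (server, port))
            data, _ = s.recvfrom(48)
        # Transmit timestamp lives at bytes 40..47: seconds + fraction.
        secs, frac = struct.unpack("!II", data[40:48])
        return secs - NTP_EPOCH_OFFSET + frac / 2**32

    print(time.ctime(sntp_time()))
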
Sep 12 17:30:46.329834 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:30:46.329869 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:30:46.330960 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:30:46.333280 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:30:46.335977 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:30:46.338335 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:30:46.343416 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:30:46.345090 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:30:46.345445 jq[1440]: false Sep 12 17:30:46.346566 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:30:46.349366 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:30:46.352397 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:30:46.357450 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:30:46.362390 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:30:46.364103 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:30:46.364624 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:30:46.367754 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:30:46.370529 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:30:46.374156 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:30:46.376492 extend-filesystems[1441]: Found loop3 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found loop4 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found loop5 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found sr0 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda1 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda2 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda3 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found usr Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda4 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda6 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda7 Sep 12 17:30:46.377635 extend-filesystems[1441]: Found vda9 Sep 12 17:30:46.377635 extend-filesystems[1441]: Checking size of /dev/vda9 Sep 12 17:30:46.376657 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:30:46.384427 jq[1452]: true Sep 12 17:30:46.386300 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:30:46.402644 dbus-daemon[1439]: [system] SELinux support is enabled Sep 12 17:30:46.409501 update_engine[1451]: I20250912 17:30:46.406770 1451 main.cc:92] Flatcar Update Engine starting Sep 12 17:30:46.409501 update_engine[1451]: I20250912 17:30:46.407943 1451 update_check_scheduler.cc:74] Next update check in 2m5s Sep 12 17:30:46.404595 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:30:46.405290 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:30:46.409973 jq[1460]: true Sep 12 17:30:46.433145 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:30:46.437657 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:30:46.437971 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:30:46.449239 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:30:46.456475 extend-filesystems[1441]: Resized partition /dev/vda9 Sep 12 17:30:46.466245 tar[1459]: linux-amd64/LICENSE Sep 12 17:30:46.467427 tar[1459]: linux-amd64/helm Sep 12 17:30:46.467066 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:30:46.467095 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:30:46.469386 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:30:46.469411 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:30:46.473671 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1387) Sep 12 17:30:46.472647 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:30:46.475909 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:30:46.480695 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:30:46.480723 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:30:46.482989 systemd-logind[1448]: New seat seat0. Sep 12 17:30:46.483484 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:30:46.486987 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:30:46.523574 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:30:46.631429 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:30:46.643934 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:30:46.660636 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:30:46.673474 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:30:46.682398 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:30:46.682674 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:30:46.687276 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:30:46.778850 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:30:46.795157 systemd[1]: Started getty@tty1.service - Getty on tty1. 
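update_engine and locksmithd together implement Flatcar's update flow: the engine polls for and applies updates ("Next update check in 2m5s" above), and locksmithd coordinates the reboot afterwards (strategy "reboot" here). Assuming the standard Flatcar client tools are on the PATH, both can be inspected with:

  update_engine_client -status   # UPDATE_STATUS_IDLE, _DOWNLOADING, _UPDATED_NEED_REBOOT, ...
  locksmithctl status            # who holds the reboot lock, if anyone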
Sep 12 17:30:46.814576 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:30:46.817288 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:30:46.818334 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:30:47.442613 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:30:47.442613 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:30:47.442613 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:30:47.447110 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Sep 12 17:30:47.449257 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:30:47.451228 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:30:47.449699 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:30:47.453689 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:30:47.456056 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:30:47.492981 containerd[1469]: time="2025-09-12T17:30:47.492783834Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:30:47.938526 systemd-networkd[1402]: eth0: Gained IPv6LL Sep 12 17:30:47.944162 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:30:47.946793 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:30:47.957535 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:30:47.961319 containerd[1469]: time="2025-09-12T17:30:47.961271128Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:30:47.964268 containerd[1469]: time="2025-09-12T17:30:47.964157232Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:30:47.964268 containerd[1469]: time="2025-09-12T17:30:47.964189412Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:30:47.964268 containerd[1469]: time="2025-09-12T17:30:47.964208661Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:30:47.964427 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:30:47.964534 containerd[1469]: time="2025-09-12T17:30:47.964517339Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:30:47.964567 containerd[1469]: time="2025-09-12T17:30:47.964540821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:30:47.964674 containerd[1469]: time="2025-09-12T17:30:47.964639729Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:30:47.964674 containerd[1469]: time="2025-09-12T17:30:47.964662840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:30:47.965760 containerd[1469]: time="2025-09-12T17:30:47.965180477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:30:47.965760 containerd[1469]: time="2025-09-12T17:30:47.965205273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:30:47.965760 containerd[1469]: time="2025-09-12T17:30:47.965245487Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:30:47.965760 containerd[1469]: time="2025-09-12T17:30:47.965259822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:30:47.965760 containerd[1469]: time="2025-09-12T17:30:47.965387879Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:30:47.965760 containerd[1469]: time="2025-09-12T17:30:47.965704161Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:30:47.966126 containerd[1469]: time="2025-09-12T17:30:47.966100150Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:30:47.966199 containerd[1469]: time="2025-09-12T17:30:47.966181190Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:30:47.966410 containerd[1469]: time="2025-09-12T17:30:47.966387140Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 17:30:47.966544 containerd[1469]: time="2025-09-12T17:30:47.966525459Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:30:47.967833 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:30:48.008430 containerd[1469]: time="2025-09-12T17:30:48.007918244Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:30:48.008430 containerd[1469]: time="2025-09-12T17:30:48.008035177Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:30:48.008430 containerd[1469]: time="2025-09-12T17:30:48.008058287Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:30:48.008430 containerd[1469]: time="2025-09-12T17:30:48.008194760Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:30:48.008430 containerd[1469]: time="2025-09-12T17:30:48.008264270Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:30:48.008590 containerd[1469]: time="2025-09-12T17:30:48.008440935Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:30:48.011373 containerd[1469]: time="2025-09-12T17:30:48.011320328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 12 17:30:48.011694 containerd[1469]: time="2025-09-12T17:30:48.011676094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:30:48.011755 containerd[1469]: time="2025-09-12T17:30:48.011742615Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 17:30:48.011806 containerd[1469]: time="2025-09-12T17:30:48.011794412Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011860622Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011888638Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011906201Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011924265Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011938498Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011951137Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011963032Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.011974758Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.012007718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.012021910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.012033887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.012047468Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.012061420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.013689 containerd[1469]: time="2025-09-12T17:30:48.012078071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012093356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012111311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012125393Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012139887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012152315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012164271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012178083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012195476Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012230351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012254514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012271044Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012340685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012359994Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:30:48.014003 containerd[1469]: time="2025-09-12T17:30:48.012370956Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:30:48.014509 containerd[1469]: time="2025-09-12T17:30:48.012383113Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:30:48.014509 containerd[1469]: time="2025-09-12T17:30:48.012393014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 17:30:48.014509 containerd[1469]: time="2025-09-12T17:30:48.012405973Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:30:48.014509 containerd[1469]: time="2025-09-12T17:30:48.012422031Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:30:48.014509 containerd[1469]: time="2025-09-12T17:30:48.012432633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 17:30:48.014684 containerd[1469]: time="2025-09-12T17:30:48.012700844Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:30:48.014684 containerd[1469]: time="2025-09-12T17:30:48.012766153Z" level=info msg="Connect containerd service" Sep 12 17:30:48.014684 containerd[1469]: time="2025-09-12T17:30:48.012811259Z" level=info msg="using legacy CRI server" Sep 12 17:30:48.014684 containerd[1469]: time="2025-09-12T17:30:48.013972523Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:30:48.014684 containerd[1469]: time="2025-09-12T17:30:48.014131424Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:30:48.015015 containerd[1469]: time="2025-09-12T17:30:48.014861754Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:30:48.014891 
systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:30:48.015165 containerd[1469]: time="2025-09-12T17:30:48.015123576Z" level=info msg="Start subscribing containerd event" Sep 12 17:30:48.015197 containerd[1469]: time="2025-09-12T17:30:48.015173547Z" level=info msg="Start recovering state" Sep 12 17:30:48.015165 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015257602Z" level=info msg="Start event monitor" Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015270691Z" level=info msg="Start snapshots syncer" Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015281243Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015289749Z" level=info msg="Start streaming server" Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015642546Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015804616Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:30:48.016403 containerd[1469]: time="2025-09-12T17:30:48.015925161Z" level=info msg="containerd successfully booted in 0.525643s" Sep 12 17:30:48.017164 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:30:48.020090 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:30:48.027777 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:30:48.267174 tar[1459]: linux-amd64/README.md Sep 12 17:30:48.372425 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:30:48.573641 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:30:48.611457 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:46166.service - OpenSSH per-connection server daemon (10.0.0.1:46166). Sep 12 17:30:48.712335 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 46166 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:48.717480 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:48.732988 systemd-logind[1448]: New session 1 of user core. Sep 12 17:30:48.735323 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:30:48.744630 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:30:48.778958 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:30:48.793606 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:30:48.856163 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:30:49.035957 systemd[1552]: Queued start job for default target default.target. Sep 12 17:30:49.062366 systemd[1552]: Created slice app.slice - User Application Slice. Sep 12 17:30:49.062425 systemd[1552]: Reached target paths.target - Paths. Sep 12 17:30:49.062446 systemd[1552]: Reached target timers.target - Timers. Sep 12 17:30:49.064763 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:30:49.081452 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:30:49.081593 systemd[1552]: Reached target sockets.target - Sockets. 
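The "failed to load cni during init" error from containerd above is expected at this stage: nothing has populated /etc/cni/net.d yet, and the CRI plugin retries once a network add-on writes a config there. For illustration only (the name and subnet below are invented; a real cluster gets this file from its CNI add-on), a minimal /etc/cni/net.d/10-mynet.conflist that would satisfy the check looks like:

  {
    "cniVersion": "0.4.0",
    "name": "mynet",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/24" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }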
Sep 12 17:30:49.081612 systemd[1552]: Reached target basic.target - Basic System. Sep 12 17:30:49.081656 systemd[1552]: Reached target default.target - Main User Target. Sep 12 17:30:49.081693 systemd[1552]: Startup finished in 215ms. Sep 12 17:30:49.081962 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:30:49.084677 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:30:49.164005 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:46174.service - OpenSSH per-connection server daemon (10.0.0.1:46174). Sep 12 17:30:49.224525 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 46174 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:49.226733 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:49.230807 systemd-logind[1448]: New session 2 of user core. Sep 12 17:30:49.239365 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:30:49.397865 sshd[1563]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:49.408118 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:46174.service: Deactivated successfully. Sep 12 17:30:49.410295 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:30:49.412119 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:30:49.418705 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:46176.service - OpenSSH per-connection server daemon (10.0.0.1:46176). Sep 12 17:30:49.420619 systemd-logind[1448]: Removed session 2. Sep 12 17:30:49.462293 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 46176 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:49.463637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:30:49.464113 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:49.465311 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:30:49.466606 systemd[1]: Startup finished in 1.345s (kernel) + 7.932s (initrd) + 5.825s (userspace) = 15.103s. Sep 12 17:30:49.470408 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:30:49.471053 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:30:49.471582 systemd-logind[1448]: New session 3 of user core. Sep 12 17:30:49.527914 sshd[1570]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:49.532155 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:46176.service: Deactivated successfully. Sep 12 17:30:49.533997 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:30:49.534705 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:30:49.535556 systemd-logind[1448]: Removed session 3. Sep 12 17:30:50.069355 kubelet[1577]: E0912 17:30:50.069270 1577 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:30:50.073976 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:30:50.074210 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:30:50.074669 systemd[1]: kubelet.service: Consumed 1.892s CPU time. 
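The kubelet failure above recurs throughout this log and is the normal pre-bootstrap state: /var/lib/kubelet/config.yaml is only written when kubeadm init or join runs, so every systemd-triggered restart before then exits with status 1. The missing file is a KubeletConfiguration; a minimal hand-written one — illustrative only, since kubeadm generates a much fuller version — would be:

  # /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd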
Sep 12 17:30:59.552107 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:49250.service - OpenSSH per-connection server daemon (10.0.0.1:49250). Sep 12 17:30:59.591254 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 49250 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:59.592936 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:59.596906 systemd-logind[1448]: New session 4 of user core. Sep 12 17:30:59.606339 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:30:59.661541 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:59.673801 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:49250.service: Deactivated successfully. Sep 12 17:30:59.676045 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:30:59.677607 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:30:59.692503 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:49252.service - OpenSSH per-connection server daemon (10.0.0.1:49252). Sep 12 17:30:59.693509 systemd-logind[1448]: Removed session 4. Sep 12 17:30:59.727598 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 49252 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:59.729310 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:59.733246 systemd-logind[1448]: New session 5 of user core. Sep 12 17:30:59.743357 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 17:30:59.794167 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:59.807639 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:49252.service: Deactivated successfully. Sep 12 17:30:59.810038 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:30:59.812158 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:30:59.823508 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:49256.service - OpenSSH per-connection server daemon (10.0.0.1:49256). Sep 12 17:30:59.824455 systemd-logind[1448]: Removed session 5. Sep 12 17:30:59.858003 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 49256 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:59.859846 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:59.864150 systemd-logind[1448]: New session 6 of user core. Sep 12 17:30:59.873355 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:30:59.929462 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 12 17:30:59.939182 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:49256.service: Deactivated successfully. Sep 12 17:30:59.941125 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:30:59.942892 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:30:59.954446 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:34068.service - OpenSSH per-connection server daemon (10.0.0.1:34068). Sep 12 17:30:59.955441 systemd-logind[1448]: Removed session 6. Sep 12 17:30:59.988116 sshd[1615]: Accepted publickey for core from 10.0.0.1 port 34068 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:30:59.989918 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:30:59.993965 systemd-logind[1448]: New session 7 of user core. Sep 12 17:31:00.009358 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:31:00.069415 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:31:00.069825 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:00.086911 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:00.089036 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:00.102596 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:34068.service: Deactivated successfully. Sep 12 17:31:00.105008 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:31:00.106064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:31:00.106541 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:31:00.119412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:00.120719 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:34070.service - OpenSSH per-connection server daemon (10.0.0.1:34070). Sep 12 17:31:00.121574 systemd-logind[1448]: Removed session 7. Sep 12 17:31:00.160447 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 34070 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:31:00.162316 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:00.166780 systemd-logind[1448]: New session 8 of user core. Sep 12 17:31:00.180495 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:31:00.238614 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:31:00.238988 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:00.243944 sudo[1630]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:00.250887 sudo[1629]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:31:00.251247 sudo[1629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:00.268504 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:31:00.271489 auditctl[1633]: No rules Sep 12 17:31:00.272585 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:31:00.272872 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:31:00.275045 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:31:00.308820 augenrules[1651]: No rules Sep 12 17:31:00.310851 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:31:00.312484 sudo[1629]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:00.314602 sshd[1624]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:00.326129 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:34070.service: Deactivated successfully. Sep 12 17:31:00.328723 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:31:00.329327 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:31:00.331416 systemd-logind[1448]: Removed session 8. Sep 12 17:31:00.332850 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:34076.service - OpenSSH per-connection server daemon (10.0.0.1:34076). Sep 12 17:31:00.346517 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
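The sudo and auditctl entries above show audit-rules.service being flushed ("No rules") and rebuilt after the two SELinux rule files were removed. The manual equivalent of that stop/start cycle, assuming the standard auditd toolchain, is:

  sudo auditctl -l        # list currently loaded rules; prints "No rules" when empty
  sudo augenrules --load  # recompile /etc/audit/rules.d/*.rules and load the result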
Sep 12 17:31:00.352080 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:31:00.373045 sshd[1661]: Accepted publickey for core from 10.0.0.1 port 34076 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:31:00.374454 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:00.379147 systemd-logind[1448]: New session 9 of user core. Sep 12 17:31:00.389370 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:31:00.429347 kubelet[1665]: E0912 17:31:00.429276 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:31:00.436901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:31:00.437126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:31:00.445260 sudo[1674]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:31:00.445671 sudo[1674]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:00.903464 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:31:00.903636 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:31:01.480593 dockerd[1693]: time="2025-09-12T17:31:01.480518485Z" level=info msg="Starting up" Sep 12 17:31:02.287446 dockerd[1693]: time="2025-09-12T17:31:02.287380663Z" level=info msg="Loading containers: start." Sep 12 17:31:02.406241 kernel: Initializing XFRM netlink socket Sep 12 17:31:02.488567 systemd-networkd[1402]: docker0: Link UP Sep 12 17:31:02.511951 dockerd[1693]: time="2025-09-12T17:31:02.511904596Z" level=info msg="Loading containers: done." Sep 12 17:31:02.526904 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck23455330-merged.mount: Deactivated successfully. Sep 12 17:31:02.528407 dockerd[1693]: time="2025-09-12T17:31:02.528363913Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:31:02.528483 dockerd[1693]: time="2025-09-12T17:31:02.528470021Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:31:02.528624 dockerd[1693]: time="2025-09-12T17:31:02.528602111Z" level=info msg="Daemon has completed initialization" Sep 12 17:31:02.567805 dockerd[1693]: time="2025-09-12T17:31:02.567614498Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:31:02.567916 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:31:03.791699 containerd[1469]: time="2025-09-12T17:31:03.791639373Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 12 17:31:04.474042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1842080237.mount: Deactivated successfully. 
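dockerd comes up above on the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR; this degrades image-build performance only, not running containers. The driver in use can be confirmed with:

  docker info --format '{{.Driver}}'   # expect: overlay2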
Sep 12 17:31:06.213987 containerd[1469]: time="2025-09-12T17:31:06.213895827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:06.217518 containerd[1469]: time="2025-09-12T17:31:06.217418333Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114893" Sep 12 17:31:06.218756 containerd[1469]: time="2025-09-12T17:31:06.218697113Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:06.284379 containerd[1469]: time="2025-09-12T17:31:06.284309037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:06.285844 containerd[1469]: time="2025-09-12T17:31:06.285808590Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 2.494119572s" Sep 12 17:31:06.285899 containerd[1469]: time="2025-09-12T17:31:06.285851576Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Sep 12 17:31:06.286804 containerd[1469]: time="2025-09-12T17:31:06.286542692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 12 17:31:08.055930 containerd[1469]: time="2025-09-12T17:31:08.055854703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:08.056688 containerd[1469]: time="2025-09-12T17:31:08.056632960Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020844" Sep 12 17:31:08.058100 containerd[1469]: time="2025-09-12T17:31:08.058040727Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:08.061489 containerd[1469]: time="2025-09-12T17:31:08.061443486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:08.062985 containerd[1469]: time="2025-09-12T17:31:08.062935006Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.776347786s" Sep 12 17:31:08.063028 containerd[1469]: time="2025-09-12T17:31:08.062980506Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Sep 12 17:31:08.063623 
containerd[1469]: time="2025-09-12T17:31:08.063595384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 12 17:31:09.866079 containerd[1469]: time="2025-09-12T17:31:09.866013131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:09.866785 containerd[1469]: time="2025-09-12T17:31:09.866711598Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155568" Sep 12 17:31:09.867966 containerd[1469]: time="2025-09-12T17:31:09.867922730Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:09.870912 containerd[1469]: time="2025-09-12T17:31:09.870861294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:09.872307 containerd[1469]: time="2025-09-12T17:31:09.872256034Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.80862365s" Sep 12 17:31:09.872384 containerd[1469]: time="2025-09-12T17:31:09.872309009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Sep 12 17:31:09.872891 containerd[1469]: time="2025-09-12T17:31:09.872863734Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 12 17:31:10.442569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:31:10.535352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:10.755445 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:10.759937 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:31:11.545772 kubelet[1915]: E0912 17:31:11.545615 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:31:11.550675 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:31:11.550955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:31:12.528463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3403976862.mount: Deactivated successfully. 
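Each PullImage block above is containerd answering a CRI pull request and reporting the repo tag, digest, and elapsed time. The same operation can be driven by hand through the CRI socket — assuming crictl is installed and pointed at /run/containerd/containerd.sock — with:

  crictl pull registry.k8s.io/kube-proxy:v1.33.5   # same code path as the log entries
  crictl images                                    # verify the tag and image ID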
Sep 12 17:31:14.146835 containerd[1469]: time="2025-09-12T17:31:14.146736811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:14.147773 containerd[1469]: time="2025-09-12T17:31:14.147696497Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929469" Sep 12 17:31:14.150761 containerd[1469]: time="2025-09-12T17:31:14.149852314Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:14.152784 containerd[1469]: time="2025-09-12T17:31:14.152740693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:14.153547 containerd[1469]: time="2025-09-12T17:31:14.153484538Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 4.280584716s" Sep 12 17:31:14.153547 containerd[1469]: time="2025-09-12T17:31:14.153542039Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Sep 12 17:31:14.154193 containerd[1469]: time="2025-09-12T17:31:14.154164330Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 12 17:31:14.774142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2923491675.mount: Deactivated successfully. 
Sep 12 17:31:15.621437 containerd[1469]: time="2025-09-12T17:31:15.621357186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:15.622141 containerd[1469]: time="2025-09-12T17:31:15.622090559Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 12 17:31:15.623512 containerd[1469]: time="2025-09-12T17:31:15.623479053Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:15.626765 containerd[1469]: time="2025-09-12T17:31:15.626712800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:15.628087 containerd[1469]: time="2025-09-12T17:31:15.628044697Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.473846657s" Sep 12 17:31:15.628139 containerd[1469]: time="2025-09-12T17:31:15.628084169Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 12 17:31:15.628765 containerd[1469]: time="2025-09-12T17:31:15.628738238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:31:16.157993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount170074001.mount: Deactivated successfully. 
Sep 12 17:31:16.165727 containerd[1469]: time="2025-09-12T17:31:16.165671199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:16.166972 containerd[1469]: time="2025-09-12T17:31:16.166910768Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:31:16.183799 containerd[1469]: time="2025-09-12T17:31:16.183761529Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:16.250712 containerd[1469]: time="2025-09-12T17:31:16.250605310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:16.251938 containerd[1469]: time="2025-09-12T17:31:16.251871925Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 623.092761ms" Sep 12 17:31:16.251938 containerd[1469]: time="2025-09-12T17:31:16.251930626Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:31:16.252872 containerd[1469]: time="2025-09-12T17:31:16.252839032Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 12 17:31:16.905137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4135803380.mount: Deactivated successfully. Sep 12 17:31:20.337694 containerd[1469]: time="2025-09-12T17:31:20.337474502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:20.338850 containerd[1469]: time="2025-09-12T17:31:20.338800171Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378433" Sep 12 17:31:20.340497 containerd[1469]: time="2025-09-12T17:31:20.340462673Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:20.344275 containerd[1469]: time="2025-09-12T17:31:20.344202684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:20.345369 containerd[1469]: time="2025-09-12T17:31:20.345322127Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.092449767s" Sep 12 17:31:20.345369 containerd[1469]: time="2025-09-12T17:31:20.345356065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 12 17:31:21.692556 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. 
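Note the spacing of the kubelet restarts: each crash is followed about ten seconds later by "Scheduled restart job" (failed 17:31:11.55, rescheduled 17:31:21.69), consistent with the Restart=always, RestartSec=10 settings that kubeadm's kubelet drop-in ships (assumed here, not shown in this log). The effective values and counter can be read back with:

  systemctl show kubelet -p Restart -p RestartUSec -p NRestarts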
Sep 12 17:31:21.701397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:21.887164 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:21.892684 (kubelet)[2074]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:31:21.942524 kubelet[2074]: E0912 17:31:21.942383 2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:31:21.947852 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:31:21.948123 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:31:22.733955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:22.749490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:22.776561 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-9.scope)... Sep 12 17:31:22.776585 systemd[1]: Reloading... Sep 12 17:31:22.862267 zram_generator::config[2132]: No configuration found. Sep 12 17:31:23.790074 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:31:23.892531 systemd[1]: Reloading finished in 1115 ms. Sep 12 17:31:23.944987 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:31:23.945118 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:31:23.945665 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:23.947540 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:24.129829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:24.151639 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:31:24.200248 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:31:24.200248 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:31:24.200248 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
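Once a real kubelet configuration is in place, the daemon immediately flags three of its command-line options as deprecated; upstream wants these expressed in the config file rather than as flags. As an illustrative mapping (the config key is the KubeletConfiguration v1beta1 field; the socket path is the one containerd advertised earlier in this log):

  # flag:   --container-runtime-endpoint=unix:///run/containerd/containerd.sock
  # config: containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"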
Sep 12 17:31:24.200248 kubelet[2178]: I0912 17:31:24.199570 2178 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:31:24.846839 kubelet[2178]: I0912 17:31:24.846771 2178 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:31:24.846839 kubelet[2178]: I0912 17:31:24.846805 2178 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:31:24.847057 kubelet[2178]: I0912 17:31:24.847022 2178 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:31:24.933197 kubelet[2178]: I0912 17:31:24.933128 2178 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:31:24.938027 kubelet[2178]: E0912 17:31:24.937994 2178 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:31:24.960500 kubelet[2178]: E0912 17:31:24.960415 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:31:24.960500 kubelet[2178]: I0912 17:31:24.960465 2178 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:31:24.966733 kubelet[2178]: I0912 17:31:24.966686 2178 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:31:24.967054 kubelet[2178]: I0912 17:31:24.967016 2178 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:31:24.967264 kubelet[2178]: I0912 17:31:24.967043 2178 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:31:24.967387 kubelet[2178]: I0912 17:31:24.967274 2178 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:31:24.967387 kubelet[2178]: I0912 17:31:24.967289 2178 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:31:24.968334 kubelet[2178]: I0912 17:31:24.968301 2178 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:31:24.971868 kubelet[2178]: I0912 17:31:24.971835 2178 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:31:24.971868 kubelet[2178]: I0912 17:31:24.971864 2178 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:31:24.971934 kubelet[2178]: I0912 17:31:24.971903 2178 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:31:24.971934 kubelet[2178]: I0912 17:31:24.971919 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:31:24.988269 kubelet[2178]: E0912 17:31:24.988203 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:31:24.988384 kubelet[2178]: E0912 17:31:24.988303 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:31:24.988384 
kubelet[2178]: I0912 17:31:24.988339 2178 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:31:24.988920 kubelet[2178]: I0912 17:31:24.988878 2178 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:31:24.989769 kubelet[2178]: W0912 17:31:24.989747 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:31:24.993048 kubelet[2178]: I0912 17:31:24.993028 2178 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:31:24.993105 kubelet[2178]: I0912 17:31:24.993091 2178 server.go:1289] "Started kubelet" Sep 12 17:31:24.993898 kubelet[2178]: I0912 17:31:24.993835 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:31:24.994346 kubelet[2178]: I0912 17:31:24.994322 2178 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:31:24.997818 kubelet[2178]: I0912 17:31:24.997770 2178 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:31:24.998855 kubelet[2178]: I0912 17:31:24.998830 2178 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:31:24.999256 kubelet[2178]: I0912 17:31:24.999237 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:31:25.001768 kubelet[2178]: I0912 17:31:25.000708 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:31:25.001768 kubelet[2178]: I0912 17:31:25.000947 2178 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:31:25.001768 kubelet[2178]: E0912 17:31:25.001117 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.001871 kubelet[2178]: I0912 17:31:25.001833 2178 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:31:25.001901 kubelet[2178]: I0912 17:31:25.001891 2178 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:31:25.004916 kubelet[2178]: E0912 17:31:25.004867 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" Sep 12 17:31:25.005274 kubelet[2178]: I0912 17:31:25.005238 2178 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:31:25.005390 kubelet[2178]: I0912 17:31:25.005317 2178 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:31:25.006587 kubelet[2178]: E0912 17:31:25.006396 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:31:25.006794 kubelet[2178]: E0912 17:31:25.006772 2178 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:31:25.006839 kubelet[2178]: I0912 17:31:25.006794 2178 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:31:25.024424 kubelet[2178]: I0912 17:31:25.023337 2178 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:31:25.026883 kubelet[2178]: I0912 17:31:25.026866 2178 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:31:25.027121 kubelet[2178]: I0912 17:31:25.026964 2178 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:31:25.027121 kubelet[2178]: I0912 17:31:25.026989 2178 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:31:25.027121 kubelet[2178]: I0912 17:31:25.027026 2178 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:31:25.027121 kubelet[2178]: E0912 17:31:25.027072 2178 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:31:25.030292 kubelet[2178]: I0912 17:31:25.030275 2178 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:31:25.030292 kubelet[2178]: I0912 17:31:25.030288 2178 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:31:25.030372 kubelet[2178]: I0912 17:31:25.030308 2178 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:31:25.101844 kubelet[2178]: E0912 17:31:25.101729 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.128086 kubelet[2178]: E0912 17:31:25.128032 2178 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:31:25.202308 kubelet[2178]: E0912 17:31:25.202266 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.205951 kubelet[2178]: E0912 17:31:25.205902 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" Sep 12 17:31:25.303250 kubelet[2178]: E0912 17:31:25.303177 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.328491 kubelet[2178]: E0912 17:31:25.328446 2178 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:31:25.403963 kubelet[2178]: E0912 17:31:25.403711 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.504628 kubelet[2178]: E0912 17:31:25.504509 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.605165 kubelet[2178]: E0912 17:31:25.605057 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.606597 kubelet[2178]: E0912 17:31:25.606544 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: 
connection refused" interval="800ms" Sep 12 17:31:25.706114 kubelet[2178]: E0912 17:31:25.706048 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.729377 kubelet[2178]: E0912 17:31:25.729308 2178 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:31:25.806736 kubelet[2178]: E0912 17:31:25.806687 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:25.907199 kubelet[2178]: E0912 17:31:25.907146 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.002318 kubelet[2178]: E0912 17:31:26.002050 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:31:26.007382 kubelet[2178]: E0912 17:31:26.007323 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.078311 kubelet[2178]: E0912 17:31:26.078180 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:31:26.107835 kubelet[2178]: E0912 17:31:26.107728 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.208444 kubelet[2178]: E0912 17:31:26.208365 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.309154 kubelet[2178]: E0912 17:31:26.309013 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.375313 kubelet[2178]: E0912 17:31:26.375206 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:31:26.379314 kubelet[2178]: E0912 17:31:26.378136 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186499470b530e81 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:31:24.993056385 +0000 UTC m=+0.836138953,LastTimestamp:2025-09-12 17:31:24.993056385 +0000 UTC m=+0.836138953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:31:26.379744 kubelet[2178]: E0912 17:31:26.379641 2178 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:31:26.408031 kubelet[2178]: E0912 17:31:26.407897 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="1.6s" Sep 12 17:31:26.409993 kubelet[2178]: E0912 17:31:26.409927 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.462605 kubelet[2178]: I0912 17:31:26.462518 2178 policy_none.go:49] "None policy: Start" Sep 12 17:31:26.462825 kubelet[2178]: I0912 17:31:26.462637 2178 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:31:26.462825 kubelet[2178]: I0912 17:31:26.462660 2178 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:31:26.510166 kubelet[2178]: E0912 17:31:26.510093 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.530526 kubelet[2178]: E0912 17:31:26.530449 2178 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:31:26.610908 kubelet[2178]: E0912 17:31:26.610775 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:26.616881 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:31:26.642304 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:31:26.646317 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:31:26.661618 kubelet[2178]: E0912 17:31:26.661561 2178 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:31:26.662087 kubelet[2178]: I0912 17:31:26.661871 2178 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:31:26.662087 kubelet[2178]: I0912 17:31:26.661928 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:31:26.662348 kubelet[2178]: I0912 17:31:26.662334 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:31:26.663206 kubelet[2178]: E0912 17:31:26.663185 2178 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:31:26.663292 kubelet[2178]: E0912 17:31:26.663239 2178 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:31:26.764758 kubelet[2178]: I0912 17:31:26.764695 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:31:26.765375 kubelet[2178]: E0912 17:31:26.765316 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:31:26.967392 kubelet[2178]: I0912 17:31:26.967346 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:31:26.967999 kubelet[2178]: E0912 17:31:26.967938 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:31:27.093776 kubelet[2178]: E0912 17:31:27.093727 2178 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 12 17:31:27.370005 kubelet[2178]: I0912 17:31:27.369847 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:31:27.370475 kubelet[2178]: E0912 17:31:27.370344 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:31:27.783649 kubelet[2178]: E0912 17:31:27.783531 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:31:27.885345 kubelet[2178]: E0912 17:31:27.885278 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:31:28.008940 kubelet[2178]: E0912 17:31:28.008875 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="3.2s" Sep 12 17:31:28.143490 systemd[1]: Created slice kubepods-burstable-pod33195d8ce51232a80f331d80828e16f3.slice - libcontainer container kubepods-burstable-pod33195d8ce51232a80f331d80828e16f3.slice. Sep 12 17:31:28.162476 kubelet[2178]: E0912 17:31:28.162432 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:28.166254 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
Sep 12 17:31:28.167834 kubelet[2178]: E0912 17:31:28.167815 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:28.169492 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 12 17:31:28.172333 kubelet[2178]: I0912 17:31:28.172312 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:31:28.172745 kubelet[2178]: E0912 17:31:28.172693 2178 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" Sep 12 17:31:28.173948 kubelet[2178]: E0912 17:31:28.173915 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:28.221148 kubelet[2178]: I0912 17:31:28.221117 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33195d8ce51232a80f331d80828e16f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33195d8ce51232a80f331d80828e16f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:28.221203 kubelet[2178]: I0912 17:31:28.221182 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:28.221286 kubelet[2178]: I0912 17:31:28.221250 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:28.221317 kubelet[2178]: I0912 17:31:28.221290 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:28.221317 kubelet[2178]: I0912 17:31:28.221308 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:28.221385 kubelet[2178]: I0912 17:31:28.221364 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33195d8ce51232a80f331d80828e16f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33195d8ce51232a80f331d80828e16f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:28.221409 kubelet[2178]: I0912 17:31:28.221385 2178 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33195d8ce51232a80f331d80828e16f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33195d8ce51232a80f331d80828e16f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:28.221552 kubelet[2178]: I0912 17:31:28.221492 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:28.221771 kubelet[2178]: I0912 17:31:28.221571 2178 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:28.463374 kubelet[2178]: E0912 17:31:28.463304 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:28.464145 containerd[1469]: time="2025-09-12T17:31:28.464109238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33195d8ce51232a80f331d80828e16f3,Namespace:kube-system,Attempt:0,}" Sep 12 17:31:28.469360 kubelet[2178]: E0912 17:31:28.469333 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:28.469763 containerd[1469]: time="2025-09-12T17:31:28.469718848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 17:31:28.475279 kubelet[2178]: E0912 17:31:28.475253 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:28.475680 containerd[1469]: time="2025-09-12T17:31:28.475626685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 17:31:28.920308 kubelet[2178]: E0912 17:31:28.920134 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:31:28.945629 kubelet[2178]: E0912 17:31:28.945556 2178 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:31:28.973023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3102156683.mount: Deactivated successfully. 
Sep 12 17:31:28.980508 containerd[1469]: time="2025-09-12T17:31:28.980445574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:28.982380 containerd[1469]: time="2025-09-12T17:31:28.982337127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:31:28.983350 containerd[1469]: time="2025-09-12T17:31:28.983325608Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:28.984387 containerd[1469]: time="2025-09-12T17:31:28.984334979Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:28.985399 containerd[1469]: time="2025-09-12T17:31:28.985357766Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:28.986175 containerd[1469]: time="2025-09-12T17:31:28.986117928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:31:28.987079 containerd[1469]: time="2025-09-12T17:31:28.987028505Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:31:28.988609 containerd[1469]: time="2025-09-12T17:31:28.988573448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:28.990704 containerd[1469]: time="2025-09-12T17:31:28.990676095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 526.488503ms" Sep 12 17:31:28.991576 containerd[1469]: time="2025-09-12T17:31:28.991522176Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.75336ms" Sep 12 17:31:28.997484 containerd[1469]: time="2025-09-12T17:31:28.997432376Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 521.711858ms" Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.301997185Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.302061392Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.302072584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.301598125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.301691106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.301708060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:29.302172 containerd[1469]: time="2025-09-12T17:31:29.301855088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:29.303375 containerd[1469]: time="2025-09-12T17:31:29.303103301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:29.303375 containerd[1469]: time="2025-09-12T17:31:29.303263314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:29.303598 containerd[1469]: time="2025-09-12T17:31:29.303392117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:29.303598 containerd[1469]: time="2025-09-12T17:31:29.302174282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:29.309268 containerd[1469]: time="2025-09-12T17:31:29.306612429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:29.405380 systemd[1]: Started cri-containerd-6a417d38ff61f4f9d16ae1390876aa48ce3caa60c90ffed1d7352cc12463484e.scope - libcontainer container 6a417d38ff61f4f9d16ae1390876aa48ce3caa60c90ffed1d7352cc12463484e. Sep 12 17:31:29.411972 systemd[1]: Started cri-containerd-f458985a2d7aceedd1d79667ff1515365f31a634c4a8115be602c20a36f326db.scope - libcontainer container f458985a2d7aceedd1d79667ff1515365f31a634c4a8115be602c20a36f326db. Sep 12 17:31:29.416490 systemd[1]: Started cri-containerd-2c3424e435ef0d5162e4f8e2e5ae897105abd2b7cf818d16ebc5c37aa204f0d2.scope - libcontainer container 2c3424e435ef0d5162e4f8e2e5ae897105abd2b7cf818d16ebc5c37aa204f0d2. 
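
[editor's note] The containerd lines that follow trace the CRI call sequence for each static pod: RunPodSandbox returns a sandbox ID, CreateContainer runs within that sandbox, then StartContainer. A schematic Go sketch of the ordering, using a stand-in runtime whose IDs are just hashes (everything here is hypothetical and only mirrors the sequence, not the real CRI API):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// fakeRuntime stands in for the container runtime; IDs are just hashes.
type fakeRuntime struct{}

func (fakeRuntime) RunPodSandbox(pod string) string { return id("sandbox/" + pod) }

func (fakeRuntime) CreateContainer(sandboxID, name string) string {
	return id(sandboxID + "/" + name)
}

func (fakeRuntime) StartContainer(containerID string) {
	fmt.Printf("StartContainer for %q returns successfully\n", containerID)
}

func id(s string) string { return fmt.Sprintf("%x", sha256.Sum256([]byte(s))) }

func main() {
	var rt fakeRuntime
	sandbox := rt.RunPodSandbox("kube-scheduler-localhost")
	container := rt.CreateContainer(sandbox, "kube-scheduler")
	rt.StartContainer(container)
}
```
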
Sep 12 17:31:29.470183 containerd[1469]: time="2025-09-12T17:31:29.470074166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f458985a2d7aceedd1d79667ff1515365f31a634c4a8115be602c20a36f326db\"" Sep 12 17:31:29.471745 kubelet[2178]: E0912 17:31:29.471701 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:29.474993 containerd[1469]: time="2025-09-12T17:31:29.474934309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a417d38ff61f4f9d16ae1390876aa48ce3caa60c90ffed1d7352cc12463484e\"" Sep 12 17:31:29.476195 kubelet[2178]: E0912 17:31:29.476157 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:29.478625 containerd[1469]: time="2025-09-12T17:31:29.478590175Z" level=info msg="CreateContainer within sandbox \"f458985a2d7aceedd1d79667ff1515365f31a634c4a8115be602c20a36f326db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:31:29.479054 containerd[1469]: time="2025-09-12T17:31:29.478957063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:33195d8ce51232a80f331d80828e16f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c3424e435ef0d5162e4f8e2e5ae897105abd2b7cf818d16ebc5c37aa204f0d2\"" Sep 12 17:31:29.480281 kubelet[2178]: E0912 17:31:29.480257 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:29.480337 containerd[1469]: time="2025-09-12T17:31:29.480260013Z" level=info msg="CreateContainer within sandbox \"6a417d38ff61f4f9d16ae1390876aa48ce3caa60c90ffed1d7352cc12463484e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:31:29.485059 containerd[1469]: time="2025-09-12T17:31:29.484964732Z" level=info msg="CreateContainer within sandbox \"2c3424e435ef0d5162e4f8e2e5ae897105abd2b7cf818d16ebc5c37aa204f0d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:31:29.499268 containerd[1469]: time="2025-09-12T17:31:29.499230206Z" level=info msg="CreateContainer within sandbox \"f458985a2d7aceedd1d79667ff1515365f31a634c4a8115be602c20a36f326db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"68505d20fa2502a64631d576f9589d6afe0a37494dc10131c422e22a04e92cd7\"" Sep 12 17:31:29.499899 containerd[1469]: time="2025-09-12T17:31:29.499869116Z" level=info msg="StartContainer for \"68505d20fa2502a64631d576f9589d6afe0a37494dc10131c422e22a04e92cd7\"" Sep 12 17:31:29.503994 containerd[1469]: time="2025-09-12T17:31:29.503957478Z" level=info msg="CreateContainer within sandbox \"6a417d38ff61f4f9d16ae1390876aa48ce3caa60c90ffed1d7352cc12463484e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42c6ac3c85ee0d6e50f720d1474de698f822961ed1fd35003b92be770c229c8a\"" Sep 12 17:31:29.504711 containerd[1469]: time="2025-09-12T17:31:29.504690924Z" level=info msg="StartContainer for \"42c6ac3c85ee0d6e50f720d1474de698f822961ed1fd35003b92be770c229c8a\"" Sep 12 
17:31:29.508513 containerd[1469]: time="2025-09-12T17:31:29.508457366Z" level=info msg="CreateContainer within sandbox \"2c3424e435ef0d5162e4f8e2e5ae897105abd2b7cf818d16ebc5c37aa204f0d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"be82c20722bc988d81554cd1bb0987268bf99a384bcbf2f9cee9460f43692d80\"" Sep 12 17:31:29.509901 containerd[1469]: time="2025-09-12T17:31:29.508908088Z" level=info msg="StartContainer for \"be82c20722bc988d81554cd1bb0987268bf99a384bcbf2f9cee9460f43692d80\"" Sep 12 17:31:29.545460 systemd[1]: Started cri-containerd-68505d20fa2502a64631d576f9589d6afe0a37494dc10131c422e22a04e92cd7.scope - libcontainer container 68505d20fa2502a64631d576f9589d6afe0a37494dc10131c422e22a04e92cd7. Sep 12 17:31:29.549138 systemd[1]: Started cri-containerd-42c6ac3c85ee0d6e50f720d1474de698f822961ed1fd35003b92be770c229c8a.scope - libcontainer container 42c6ac3c85ee0d6e50f720d1474de698f822961ed1fd35003b92be770c229c8a. Sep 12 17:31:29.582514 systemd[1]: Started cri-containerd-be82c20722bc988d81554cd1bb0987268bf99a384bcbf2f9cee9460f43692d80.scope - libcontainer container be82c20722bc988d81554cd1bb0987268bf99a384bcbf2f9cee9460f43692d80. Sep 12 17:31:29.651828 containerd[1469]: time="2025-09-12T17:31:29.651733722Z" level=info msg="StartContainer for \"42c6ac3c85ee0d6e50f720d1474de698f822961ed1fd35003b92be770c229c8a\" returns successfully" Sep 12 17:31:29.652268 containerd[1469]: time="2025-09-12T17:31:29.652194364Z" level=info msg="StartContainer for \"68505d20fa2502a64631d576f9589d6afe0a37494dc10131c422e22a04e92cd7\" returns successfully" Sep 12 17:31:29.652472 containerd[1469]: time="2025-09-12T17:31:29.652291555Z" level=info msg="StartContainer for \"be82c20722bc988d81554cd1bb0987268bf99a384bcbf2f9cee9460f43692d80\" returns successfully" Sep 12 17:31:29.775454 kubelet[2178]: I0912 17:31:29.775085 2178 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:31:30.045301 kubelet[2178]: E0912 17:31:30.045149 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:30.045448 kubelet[2178]: E0912 17:31:30.045323 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:30.051271 kubelet[2178]: E0912 17:31:30.050750 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:30.051271 kubelet[2178]: E0912 17:31:30.050904 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:30.051891 kubelet[2178]: E0912 17:31:30.051677 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:30.051891 kubelet[2178]: E0912 17:31:30.051812 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:31.051477 kubelet[2178]: E0912 17:31:31.051430 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:31.051904 kubelet[2178]: E0912 
17:31:31.051522 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:31.051904 kubelet[2178]: E0912 17:31:31.051553 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:31.051904 kubelet[2178]: E0912 17:31:31.051623 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:31.105178 kubelet[2178]: I0912 17:31:31.105107 2178 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:31:31.105178 kubelet[2178]: E0912 17:31:31.105167 2178 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:31:31.114065 kubelet[2178]: E0912 17:31:31.114035 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.214923 kubelet[2178]: E0912 17:31:31.214796 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.315847 kubelet[2178]: E0912 17:31:31.315686 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.366101 update_engine[1451]: I20250912 17:31:31.365994 1451 update_attempter.cc:509] Updating boot flags... Sep 12 17:31:31.416336 kubelet[2178]: E0912 17:31:31.416291 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.454374 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2470) Sep 12 17:31:31.496597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2472) Sep 12 17:31:31.517256 kubelet[2178]: E0912 17:31:31.517202 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.617759 kubelet[2178]: E0912 17:31:31.617658 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.718311 kubelet[2178]: E0912 17:31:31.718253 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.818975 kubelet[2178]: E0912 17:31:31.818909 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:31.919820 kubelet[2178]: E0912 17:31:31.919633 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.020034 kubelet[2178]: E0912 17:31:32.019962 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.053187 kubelet[2178]: E0912 17:31:32.053129 2178 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:31:32.053668 kubelet[2178]: E0912 17:31:32.053354 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 
17:31:32.120574 kubelet[2178]: E0912 17:31:32.120499 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.221178 kubelet[2178]: E0912 17:31:32.221121 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.321807 kubelet[2178]: E0912 17:31:32.321748 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.422404 kubelet[2178]: E0912 17:31:32.422345 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.523294 kubelet[2178]: E0912 17:31:32.523064 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.623642 kubelet[2178]: E0912 17:31:32.623590 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.724340 kubelet[2178]: E0912 17:31:32.724309 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.825062 kubelet[2178]: E0912 17:31:32.824908 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:32.925595 kubelet[2178]: E0912 17:31:32.925528 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:33.026273 kubelet[2178]: E0912 17:31:33.026162 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:33.126513 kubelet[2178]: E0912 17:31:33.126329 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:33.177780 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-9.scope)... Sep 12 17:31:33.177805 systemd[1]: Reloading... Sep 12 17:31:33.227106 kubelet[2178]: E0912 17:31:33.227053 2178 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:31:33.277269 zram_generator::config[2520]: No configuration found. Sep 12 17:31:33.302671 kubelet[2178]: I0912 17:31:33.302614 2178 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:33.313005 kubelet[2178]: I0912 17:31:33.312956 2178 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:33.319693 kubelet[2178]: I0912 17:31:33.319655 2178 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:33.399159 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:31:33.492951 systemd[1]: Reloading finished in 314 ms. Sep 12 17:31:33.545331 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:33.570037 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:31:33.570501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:33.570587 systemd[1]: kubelet.service: Consumed 1.358s CPU time, 132.0M memory peak, 0B memory swap peak. 
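
[editor's note] The "Creating a mirror pod for static pod" lines mark the kubelet publishing its static pods (read from /etc/kubernetes/manifests) to the API server. A mirror pod is named after the static pod plus the node name, which is where "kube-apiserver-localhost" and its siblings come from. A minimal sketch of the naming:

```go
// Mirror pod names = static pod name + "-" + node name, as seen in the
// kube-system/kube-*-localhost pods throughout this log.
package main

import "fmt"

func mirrorPodName(staticPod, node string) string {
	return fmt.Sprintf("%s-%s", staticPod, node)
}

func main() {
	for _, p := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		fmt.Println("kube-system/" + mirrorPodName(p, "localhost"))
	}
}
```
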
Sep 12 17:31:33.582577 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:33.772936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:33.780540 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:31:33.830972 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:31:33.830972 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:31:33.830972 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:31:33.831453 kubelet[2562]: I0912 17:31:33.831032 2562 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:31:33.839328 kubelet[2562]: I0912 17:31:33.839277 2562 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:31:33.839328 kubelet[2562]: I0912 17:31:33.839321 2562 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:31:33.839611 kubelet[2562]: I0912 17:31:33.839583 2562 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:31:33.841039 kubelet[2562]: I0912 17:31:33.841011 2562 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:31:33.843961 kubelet[2562]: I0912 17:31:33.843918 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:31:33.849780 kubelet[2562]: E0912 17:31:33.849749 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:31:33.849873 kubelet[2562]: I0912 17:31:33.849843 2562 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:31:33.855243 kubelet[2562]: I0912 17:31:33.855181 2562 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:31:33.855514 kubelet[2562]: I0912 17:31:33.855477 2562 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:31:33.855675 kubelet[2562]: I0912 17:31:33.855506 2562 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:31:33.855767 kubelet[2562]: I0912 17:31:33.855683 2562 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:31:33.855767 kubelet[2562]: I0912 17:31:33.855693 2562 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:31:33.855767 kubelet[2562]: I0912 17:31:33.855752 2562 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:31:33.855976 kubelet[2562]: I0912 17:31:33.855959 2562 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:31:33.856006 kubelet[2562]: I0912 17:31:33.855977 2562 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:31:33.856006 kubelet[2562]: I0912 17:31:33.856004 2562 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:31:33.856052 kubelet[2562]: I0912 17:31:33.856024 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:31:33.857085 kubelet[2562]: I0912 17:31:33.857054 2562 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:31:33.857969 kubelet[2562]: I0912 17:31:33.857770 2562 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:31:33.866554 kubelet[2562]: I0912 17:31:33.866523 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:31:33.866617 kubelet[2562]: I0912 17:31:33.866585 2562 server.go:1289] "Started kubelet" Sep 12 17:31:33.866851 kubelet[2562]: I0912 17:31:33.866816 2562 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:31:33.867249 kubelet[2562]: I0912 
17:31:33.866933 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:31:33.867346 kubelet[2562]: I0912 17:31:33.867328 2562 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:31:33.869041 kubelet[2562]: I0912 17:31:33.869005 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:31:33.869041 kubelet[2562]: I0912 17:31:33.869035 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:31:33.871974 kubelet[2562]: I0912 17:31:33.871936 2562 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:31:33.874441 kubelet[2562]: I0912 17:31:33.874406 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:31:33.874602 kubelet[2562]: I0912 17:31:33.874519 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:31:33.874732 kubelet[2562]: I0912 17:31:33.874680 2562 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:31:33.877952 kubelet[2562]: E0912 17:31:33.877913 2562 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:31:33.879794 kubelet[2562]: I0912 17:31:33.879745 2562 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:31:33.879794 kubelet[2562]: I0912 17:31:33.879781 2562 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:31:33.879900 kubelet[2562]: I0912 17:31:33.879876 2562 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:31:33.890268 kubelet[2562]: I0912 17:31:33.890171 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:31:33.893062 kubelet[2562]: I0912 17:31:33.891469 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 12 17:31:33.893062 kubelet[2562]: I0912 17:31:33.891502 2562 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:31:33.893062 kubelet[2562]: I0912 17:31:33.891528 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
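
[editor's note] The HardEvictionThresholds in the node config above encode the kubelet's eviction triggers: evict when memory.available drops below 100Mi, when nodefs.available falls under 10% of capacity, and so on for inode and imagefs signals. A minimal sketch of evaluating one such threshold (the types and sample numbers are invented for illustration):

```go
// Evaluates a hard eviction threshold: a signal crosses when the
// available amount falls below either an absolute quantity (100Mi)
// or a percentage of capacity (10%).
package main

import "fmt"

type threshold struct {
	signal   string
	quantity int64   // absolute bytes; 0 if percentage-based
	percent  float64 // fraction of capacity; 0 if quantity-based
}

func crossed(t threshold, available, capacity int64) bool {
	limit := t.quantity
	if limit == 0 {
		limit = int64(t.percent * float64(capacity))
	}
	return available < limit
}

func main() {
	memAvail := threshold{signal: "memory.available", quantity: 100 << 20} // 100Mi
	nodefs := threshold{signal: "nodefs.available", percent: 0.1}          // 10%

	fmt.Println(crossed(memAvail, 64<<20, 8<<30)) // true: 64Mi < 100Mi
	fmt.Println(crossed(nodefs, 20<<30, 100<<30)) // false: 20Gi >= 10Gi of 100Gi
}
```
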
Sep 12 17:31:33.893062 kubelet[2562]: I0912 17:31:33.891537 2562 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:31:33.893062 kubelet[2562]: E0912 17:31:33.891578 2562 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:31:33.915723 kubelet[2562]: I0912 17:31:33.915692 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:31:33.915723 kubelet[2562]: I0912 17:31:33.915708 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:31:33.915723 kubelet[2562]: I0912 17:31:33.915727 2562 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:31:33.915902 kubelet[2562]: I0912 17:31:33.915874 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:31:33.915902 kubelet[2562]: I0912 17:31:33.915887 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:31:33.915902 kubelet[2562]: I0912 17:31:33.915903 2562 policy_none.go:49] "None policy: Start" Sep 12 17:31:33.915973 kubelet[2562]: I0912 17:31:33.915913 2562 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:31:33.915973 kubelet[2562]: I0912 17:31:33.915924 2562 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:31:33.916030 kubelet[2562]: I0912 17:31:33.916014 2562 state_mem.go:75] "Updated machine memory state" Sep 12 17:31:33.920393 kubelet[2562]: E0912 17:31:33.920357 2562 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:31:33.920638 kubelet[2562]: I0912 17:31:33.920614 2562 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:31:33.920687 kubelet[2562]: I0912 17:31:33.920636 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:31:33.920956 kubelet[2562]: I0912 17:31:33.920872 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:31:33.921722 kubelet[2562]: E0912 17:31:33.921697 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:31:33.992828 kubelet[2562]: I0912 17:31:33.992751 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:33.993309 kubelet[2562]: I0912 17:31:33.993020 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:33.993309 kubelet[2562]: I0912 17:31:33.993052 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:34.001534 kubelet[2562]: E0912 17:31:34.001481 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:34.001702 kubelet[2562]: E0912 17:31:34.001580 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:34.001702 kubelet[2562]: E0912 17:31:34.001598 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:34.031550 kubelet[2562]: I0912 17:31:34.031370 2562 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:31:34.038260 kubelet[2562]: I0912 17:31:34.038184 2562 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:31:34.038406 kubelet[2562]: I0912 17:31:34.038295 2562 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:31:34.176422 kubelet[2562]: I0912 17:31:34.176358 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/33195d8ce51232a80f331d80828e16f3-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"33195d8ce51232a80f331d80828e16f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:34.176422 kubelet[2562]: I0912 17:31:34.176401 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:34.176422 kubelet[2562]: I0912 17:31:34.176423 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:34.176422 kubelet[2562]: I0912 17:31:34.176437 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:34.176717 kubelet[2562]: I0912 17:31:34.176535 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod 
\"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:34.176717 kubelet[2562]: I0912 17:31:34.176641 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/33195d8ce51232a80f331d80828e16f3-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"33195d8ce51232a80f331d80828e16f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:34.176791 kubelet[2562]: I0912 17:31:34.176712 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/33195d8ce51232a80f331d80828e16f3-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"33195d8ce51232a80f331d80828e16f3\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:34.176791 kubelet[2562]: I0912 17:31:34.176753 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:31:34.176858 kubelet[2562]: I0912 17:31:34.176790 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:34.302228 kubelet[2562]: E0912 17:31:34.302090 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:34.302571 kubelet[2562]: E0912 17:31:34.302090 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:34.302571 kubelet[2562]: E0912 17:31:34.302118 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:34.857047 kubelet[2562]: I0912 17:31:34.856997 2562 apiserver.go:52] "Watching apiserver" Sep 12 17:31:34.875152 kubelet[2562]: I0912 17:31:34.875119 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:31:34.906269 kubelet[2562]: E0912 17:31:34.906156 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:34.906410 kubelet[2562]: I0912 17:31:34.906309 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:34.907284 kubelet[2562]: I0912 17:31:34.906471 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:34.948140 kubelet[2562]: E0912 17:31:34.948063 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:31:34.948481 kubelet[2562]: E0912 17:31:34.948433 2562 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:34.948665 kubelet[2562]: E0912 17:31:34.948542 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:31:34.948665 kubelet[2562]: E0912 17:31:34.948635 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:34.975407 kubelet[2562]: I0912 17:31:34.975326 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.97530099 podStartE2EDuration="1.97530099s" podCreationTimestamp="2025-09-12 17:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:34.9654289 +0000 UTC m=+1.179716533" watchObservedRunningTime="2025-09-12 17:31:34.97530099 +0000 UTC m=+1.189588623" Sep 12 17:31:34.975642 kubelet[2562]: I0912 17:31:34.975441 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9754372550000001 podStartE2EDuration="1.975437255s" podCreationTimestamp="2025-09-12 17:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:34.975108198 +0000 UTC m=+1.189395831" watchObservedRunningTime="2025-09-12 17:31:34.975437255 +0000 UTC m=+1.189724888" Sep 12 17:31:34.996600 kubelet[2562]: I0912 17:31:34.996533 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.996515719 podStartE2EDuration="1.996515719s" podCreationTimestamp="2025-09-12 17:31:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:34.984669598 +0000 UTC m=+1.198957231" watchObservedRunningTime="2025-09-12 17:31:34.996515719 +0000 UTC m=+1.210803352" Sep 12 17:31:35.908051 kubelet[2562]: E0912 17:31:35.907995 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:35.908051 kubelet[2562]: E0912 17:31:35.908055 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:36.909136 kubelet[2562]: E0912 17:31:36.909091 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:37.910903 kubelet[2562]: E0912 17:31:37.910836 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:37.983110 kubelet[2562]: E0912 17:31:37.983075 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:39.486562 kubelet[2562]: I0912 17:31:39.486506 2562 kuberuntime_manager.go:1746] 
"Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:31:39.487087 containerd[1469]: time="2025-09-12T17:31:39.486942936Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:31:39.487409 kubelet[2562]: I0912 17:31:39.487188 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:31:40.431659 systemd[1]: Created slice kubepods-besteffort-pod5e194ebe_77c2_428c_a61f_7bb1db943654.slice - libcontainer container kubepods-besteffort-pod5e194ebe_77c2_428c_a61f_7bb1db943654.slice. Sep 12 17:31:40.507370 systemd[1]: Created slice kubepods-besteffort-pod12725823_4303_492d_ad04_9a406011c76e.slice - libcontainer container kubepods-besteffort-pod12725823_4303_492d_ad04_9a406011c76e.slice. Sep 12 17:31:40.510661 kubelet[2562]: I0912 17:31:40.510614 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e194ebe-77c2-428c-a61f-7bb1db943654-kube-proxy\") pod \"kube-proxy-679mc\" (UID: \"5e194ebe-77c2-428c-a61f-7bb1db943654\") " pod="kube-system/kube-proxy-679mc" Sep 12 17:31:40.511053 kubelet[2562]: I0912 17:31:40.510685 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e194ebe-77c2-428c-a61f-7bb1db943654-xtables-lock\") pod \"kube-proxy-679mc\" (UID: \"5e194ebe-77c2-428c-a61f-7bb1db943654\") " pod="kube-system/kube-proxy-679mc" Sep 12 17:31:40.511053 kubelet[2562]: I0912 17:31:40.510704 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e194ebe-77c2-428c-a61f-7bb1db943654-lib-modules\") pod \"kube-proxy-679mc\" (UID: \"5e194ebe-77c2-428c-a61f-7bb1db943654\") " pod="kube-system/kube-proxy-679mc" Sep 12 17:31:40.511053 kubelet[2562]: I0912 17:31:40.510726 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12725823-4303-492d-ad04-9a406011c76e-var-lib-calico\") pod \"tigera-operator-755d956888-6dqmv\" (UID: \"12725823-4303-492d-ad04-9a406011c76e\") " pod="tigera-operator/tigera-operator-755d956888-6dqmv" Sep 12 17:31:40.511053 kubelet[2562]: I0912 17:31:40.510991 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqlnt\" (UniqueName: \"kubernetes.io/projected/12725823-4303-492d-ad04-9a406011c76e-kube-api-access-pqlnt\") pod \"tigera-operator-755d956888-6dqmv\" (UID: \"12725823-4303-492d-ad04-9a406011c76e\") " pod="tigera-operator/tigera-operator-755d956888-6dqmv" Sep 12 17:31:40.511255 kubelet[2562]: I0912 17:31:40.511154 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjwm4\" (UniqueName: \"kubernetes.io/projected/5e194ebe-77c2-428c-a61f-7bb1db943654-kube-api-access-cjwm4\") pod \"kube-proxy-679mc\" (UID: \"5e194ebe-77c2-428c-a61f-7bb1db943654\") " pod="kube-system/kube-proxy-679mc" Sep 12 17:31:40.738167 kubelet[2562]: E0912 17:31:40.738107 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:40.738837 containerd[1469]: time="2025-09-12T17:31:40.738769635Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-proxy-679mc,Uid:5e194ebe-77c2-428c-a61f-7bb1db943654,Namespace:kube-system,Attempt:0,}" Sep 12 17:31:40.766283 containerd[1469]: time="2025-09-12T17:31:40.766165912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:40.766283 containerd[1469]: time="2025-09-12T17:31:40.766261746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:40.766477 containerd[1469]: time="2025-09-12T17:31:40.766277717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:40.766477 containerd[1469]: time="2025-09-12T17:31:40.766374202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:40.791398 systemd[1]: Started cri-containerd-bba91ea513fb93bdcc355fde9a936c3e209fe3135b3e58611a0c058032fe4f6e.scope - libcontainer container bba91ea513fb93bdcc355fde9a936c3e209fe3135b3e58611a0c058032fe4f6e. Sep 12 17:31:40.817114 containerd[1469]: time="2025-09-12T17:31:40.817062564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6dqmv,Uid:12725823-4303-492d-ad04-9a406011c76e,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:31:40.820400 containerd[1469]: time="2025-09-12T17:31:40.820106836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-679mc,Uid:5e194ebe-77c2-428c-a61f-7bb1db943654,Namespace:kube-system,Attempt:0,} returns sandbox id \"bba91ea513fb93bdcc355fde9a936c3e209fe3135b3e58611a0c058032fe4f6e\"" Sep 12 17:31:40.821113 kubelet[2562]: E0912 17:31:40.821086 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:40.826798 containerd[1469]: time="2025-09-12T17:31:40.826763519Z" level=info msg="CreateContainer within sandbox \"bba91ea513fb93bdcc355fde9a936c3e209fe3135b3e58611a0c058032fe4f6e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:31:40.855671 containerd[1469]: time="2025-09-12T17:31:40.855504294Z" level=info msg="CreateContainer within sandbox \"bba91ea513fb93bdcc355fde9a936c3e209fe3135b3e58611a0c058032fe4f6e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2d69b26c3beafcda01be4429a1ef73b9e018b547c17368a1da5d13cc1b2bc7a9\"" Sep 12 17:31:40.855804 kubelet[2562]: E0912 17:31:40.855770 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:40.857144 containerd[1469]: time="2025-09-12T17:31:40.856595263Z" level=info msg="StartContainer for \"2d69b26c3beafcda01be4429a1ef73b9e018b547c17368a1da5d13cc1b2bc7a9\"" Sep 12 17:31:40.859008 containerd[1469]: time="2025-09-12T17:31:40.857195804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:40.859008 containerd[1469]: time="2025-09-12T17:31:40.857268162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:40.859008 containerd[1469]: time="2025-09-12T17:31:40.857293340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:40.859008 containerd[1469]: time="2025-09-12T17:31:40.857424222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:40.887375 systemd[1]: Started cri-containerd-fc3ec6a71837e2540e2025645ecefd4d0c0786b80908567133fb6738fc6ef871.scope - libcontainer container fc3ec6a71837e2540e2025645ecefd4d0c0786b80908567133fb6738fc6ef871. Sep 12 17:31:40.893782 systemd[1]: Started cri-containerd-2d69b26c3beafcda01be4429a1ef73b9e018b547c17368a1da5d13cc1b2bc7a9.scope - libcontainer container 2d69b26c3beafcda01be4429a1ef73b9e018b547c17368a1da5d13cc1b2bc7a9. Sep 12 17:31:40.917990 kubelet[2562]: E0912 17:31:40.917953 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:40.945953 containerd[1469]: time="2025-09-12T17:31:40.945907203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6dqmv,Uid:12725823-4303-492d-ad04-9a406011c76e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fc3ec6a71837e2540e2025645ecefd4d0c0786b80908567133fb6738fc6ef871\"" Sep 12 17:31:40.950857 containerd[1469]: time="2025-09-12T17:31:40.950824264Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:31:40.951413 containerd[1469]: time="2025-09-12T17:31:40.951378646Z" level=info msg="StartContainer for \"2d69b26c3beafcda01be4429a1ef73b9e018b547c17368a1da5d13cc1b2bc7a9\" returns successfully" Sep 12 17:31:41.919865 kubelet[2562]: E0912 17:31:41.919811 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:41.921945 kubelet[2562]: E0912 17:31:41.921689 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:41.929128 kubelet[2562]: I0912 17:31:41.929041 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-679mc" podStartSLOduration=1.929015148 podStartE2EDuration="1.929015148s" podCreationTimestamp="2025-09-12 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:31:41.928891842 +0000 UTC m=+8.143179475" watchObservedRunningTime="2025-09-12 17:31:41.929015148 +0000 UTC m=+8.143302781" Sep 12 17:31:42.922709 kubelet[2562]: E0912 17:31:42.922661 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:43.903097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913100334.mount: Deactivated successfully. 
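Annotation: the kuberuntime_manager.go:1746 and kubelet_network.go:61 entries above show the kubelet pushing podCIDR 192.168.0.0/24 to the container runtime over CRI. A hedged sketch of what that call looks like against containerd's CRI socket, assuming the standard k8s.io/cri-api v1 client and the default socket path for a stock node:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// containerd's CRI endpoint; the path is an assumption for this node.
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rs := runtimeapi.NewRuntimeServiceClient(conn)
	// Mirrors "Updating runtime config through cri with podcidr".
	_, err = rs.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR pushed to runtime")
}
```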
Sep 12 17:31:44.282352 containerd[1469]: time="2025-09-12T17:31:44.282293215Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:44.283119 containerd[1469]: time="2025-09-12T17:31:44.283052482Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:31:44.284365 containerd[1469]: time="2025-09-12T17:31:44.284291344Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:44.286564 containerd[1469]: time="2025-09-12T17:31:44.286527878Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:44.287675 containerd[1469]: time="2025-09-12T17:31:44.287629809Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.336766431s" Sep 12 17:31:44.287723 containerd[1469]: time="2025-09-12T17:31:44.287670558Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:31:44.308328 containerd[1469]: time="2025-09-12T17:31:44.308301125Z" level=info msg="CreateContainer within sandbox \"fc3ec6a71837e2540e2025645ecefd4d0c0786b80908567133fb6738fc6ef871\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:31:44.477242 containerd[1469]: time="2025-09-12T17:31:44.477161095Z" level=info msg="CreateContainer within sandbox \"fc3ec6a71837e2540e2025645ecefd4d0c0786b80908567133fb6738fc6ef871\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"36ab0ce1e726555d7ba91ffc99dbcb66141a7af278ec63dbfad8ca5aabcfa693\"" Sep 12 17:31:44.477686 containerd[1469]: time="2025-09-12T17:31:44.477638866Z" level=info msg="StartContainer for \"36ab0ce1e726555d7ba91ffc99dbcb66141a7af278ec63dbfad8ca5aabcfa693\"" Sep 12 17:31:44.508345 systemd[1]: Started cri-containerd-36ab0ce1e726555d7ba91ffc99dbcb66141a7af278ec63dbfad8ca5aabcfa693.scope - libcontainer container 36ab0ce1e726555d7ba91ffc99dbcb66141a7af278ec63dbfad8ca5aabcfa693. 
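Annotation: the entries above record the pull of quay.io/tigera/operator:v1.38.6 with its repo tag, repo digest, byte count, and wall time. Reproducing such a pull out of band looks roughly like this with the containerd Go client; the "k8s.io" namespace and socket path are assumptions for a stock CRI setup:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.6", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
}
```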
Sep 12 17:31:44.538435 containerd[1469]: time="2025-09-12T17:31:44.538304049Z" level=info msg="StartContainer for \"36ab0ce1e726555d7ba91ffc99dbcb66141a7af278ec63dbfad8ca5aabcfa693\" returns successfully" Sep 12 17:31:44.935990 kubelet[2562]: I0912 17:31:44.935823 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6dqmv" podStartSLOduration=1.597812483 podStartE2EDuration="4.935807742s" podCreationTimestamp="2025-09-12 17:31:40 +0000 UTC" firstStartedPulling="2025-09-12 17:31:40.950480234 +0000 UTC m=+7.164767867" lastFinishedPulling="2025-09-12 17:31:44.288475492 +0000 UTC m=+10.502763126" observedRunningTime="2025-09-12 17:31:44.935801851 +0000 UTC m=+11.150089484" watchObservedRunningTime="2025-09-12 17:31:44.935807742 +0000 UTC m=+11.150095365" Sep 12 17:31:47.769089 kubelet[2562]: E0912 17:31:47.769022 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:47.934882 kubelet[2562]: E0912 17:31:47.934817 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:47.991568 kubelet[2562]: E0912 17:31:47.991490 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:50.314331 sudo[1674]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:50.319534 sshd[1661]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:50.325290 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:34076.service: Deactivated successfully. Sep 12 17:31:50.330198 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:31:50.330838 systemd[1]: session-9.scope: Consumed 5.088s CPU time, 161.1M memory peak, 0B memory swap peak. Sep 12 17:31:50.332351 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:31:50.333666 systemd-logind[1448]: Removed session 9. Sep 12 17:31:53.744034 systemd[1]: Created slice kubepods-besteffort-podeb95f7c7_bff2_4df1_8815_21481962e466.slice - libcontainer container kubepods-besteffort-podeb95f7c7_bff2_4df1_8815_21481962e466.slice. 
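Annotation: in the pod_startup_latency_tracker entries above, podStartSLOduration is the end-to-end figure minus the image-pull window. For tigera-operator: 4.935807742s - (17:31:44.288475492 - 17:31:40.950480234) ≈ 1.597812483s, matching the logged value; for kube-proxy the two durations coincide because nothing was pulled (zero pull timestamps). A sketch of that arithmetic with values hard-coded from the log:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			log.Fatal(err)
		}
		return t
	}

	// Timestamps copied from the tigera-operator entry above.
	firstPull := parse("2025-09-12 17:31:40.950480234 +0000 UTC")
	lastPull := parse("2025-09-12 17:31:44.288475492 +0000 UTC")
	e2e, _ := time.ParseDuration("4.935807742s") // podStartE2EDuration

	pullWindow := lastPull.Sub(firstPull) // 3.337995258s
	slo := e2e - pullWindow               // 1.597812484s ≈ logged 1.597812483s
	fmt.Println(pullWindow, slo)
}
```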
Sep 12 17:31:53.806246 kubelet[2562]: I0912 17:31:53.806109 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgq4t\" (UniqueName: \"kubernetes.io/projected/eb95f7c7-bff2-4df1-8815-21481962e466-kube-api-access-xgq4t\") pod \"calico-typha-7d988cf888-t26qn\" (UID: \"eb95f7c7-bff2-4df1-8815-21481962e466\") " pod="calico-system/calico-typha-7d988cf888-t26qn" Sep 12 17:31:53.806246 kubelet[2562]: I0912 17:31:53.806206 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb95f7c7-bff2-4df1-8815-21481962e466-typha-certs\") pod \"calico-typha-7d988cf888-t26qn\" (UID: \"eb95f7c7-bff2-4df1-8815-21481962e466\") " pod="calico-system/calico-typha-7d988cf888-t26qn" Sep 12 17:31:53.806773 kubelet[2562]: I0912 17:31:53.806267 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb95f7c7-bff2-4df1-8815-21481962e466-tigera-ca-bundle\") pod \"calico-typha-7d988cf888-t26qn\" (UID: \"eb95f7c7-bff2-4df1-8815-21481962e466\") " pod="calico-system/calico-typha-7d988cf888-t26qn" Sep 12 17:31:53.818513 systemd[1]: Created slice kubepods-besteffort-pod5c5723df_c6d9_461a_830c_0293d079a933.slice - libcontainer container kubepods-besteffort-pod5c5723df_c6d9_461a_830c_0293d079a933.slice. Sep 12 17:31:53.906536 kubelet[2562]: I0912 17:31:53.906473 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-policysync\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.909902 kubelet[2562]: I0912 17:31:53.909399 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-cni-log-dir\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.909902 kubelet[2562]: I0912 17:31:53.909635 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-flexvol-driver-host\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.909902 kubelet[2562]: I0912 17:31:53.909665 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5c5723df-c6d9-461a-830c-0293d079a933-node-certs\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.909902 kubelet[2562]: I0912 17:31:53.909683 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-lib-modules\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.909902 kubelet[2562]: I0912 17:31:53.909698 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-var-run-calico\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.910113 kubelet[2562]: I0912 17:31:53.909713 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-xtables-lock\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.910113 kubelet[2562]: I0912 17:31:53.909730 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6nzv\" (UniqueName: \"kubernetes.io/projected/5c5723df-c6d9-461a-830c-0293d079a933-kube-api-access-l6nzv\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.910113 kubelet[2562]: I0912 17:31:53.909746 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-cni-bin-dir\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.910113 kubelet[2562]: I0912 17:31:53.909764 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-cni-net-dir\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.910586 kubelet[2562]: I0912 17:31:53.910517 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5723df-c6d9-461a-830c-0293d079a933-tigera-ca-bundle\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.910586 kubelet[2562]: I0912 17:31:53.910563 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c5723df-c6d9-461a-830c-0293d079a933-var-lib-calico\") pod \"calico-node-mt456\" (UID: \"5c5723df-c6d9-461a-830c-0293d079a933\") " pod="calico-system/calico-node-mt456" Sep 12 17:31:53.935599 kubelet[2562]: E0912 17:31:53.935533 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:31:54.011302 kubelet[2562]: I0912 17:31:54.010891 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d4wp\" (UniqueName: \"kubernetes.io/projected/d8b105ba-edcc-41c9-a17f-5d76bf2daf67-kube-api-access-4d4wp\") pod \"csi-node-driver-mbdtc\" (UID: \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\") " pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:31:54.011302 kubelet[2562]: I0912 17:31:54.010976 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/d8b105ba-edcc-41c9-a17f-5d76bf2daf67-kubelet-dir\") pod \"csi-node-driver-mbdtc\" (UID: \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\") " pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:31:54.011302 kubelet[2562]: I0912 17:31:54.011011 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d8b105ba-edcc-41c9-a17f-5d76bf2daf67-registration-dir\") pod \"csi-node-driver-mbdtc\" (UID: \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\") " pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:31:54.011599 kubelet[2562]: I0912 17:31:54.011451 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d8b105ba-edcc-41c9-a17f-5d76bf2daf67-socket-dir\") pod \"csi-node-driver-mbdtc\" (UID: \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\") " pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:31:54.011599 kubelet[2562]: I0912 17:31:54.011496 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d8b105ba-edcc-41c9-a17f-5d76bf2daf67-varrun\") pod \"csi-node-driver-mbdtc\" (UID: \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\") " pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:31:54.046667 kubelet[2562]: E0912 17:31:54.046631 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.046667 kubelet[2562]: W0912 17:31:54.046656 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.046844 kubelet[2562]: E0912 17:31:54.046686 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.047013 kubelet[2562]: E0912 17:31:54.046993 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.047013 kubelet[2562]: W0912 17:31:54.047006 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.047070 kubelet[2562]: E0912 17:31:54.047015 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.047274 kubelet[2562]: E0912 17:31:54.047258 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.047274 kubelet[2562]: W0912 17:31:54.047272 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.047349 kubelet[2562]: E0912 17:31:54.047282 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:31:54.050817 kubelet[2562]: E0912 17:31:54.050785 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:54.051530 containerd[1469]: time="2025-09-12T17:31:54.051466781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d988cf888-t26qn,Uid:eb95f7c7-bff2-4df1-8815-21481962e466,Namespace:calico-system,Attempt:0,}" Sep 12 17:31:54.112800 kubelet[2562]: E0912 17:31:54.112770 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.112800 kubelet[2562]: W0912 17:31:54.112791 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.112896 kubelet[2562]: E0912 17:31:54.112819 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.113127 kubelet[2562]: E0912 17:31:54.113096 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.113127 kubelet[2562]: W0912 17:31:54.113113 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.113127 kubelet[2562]: E0912 17:31:54.113122 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.113378 kubelet[2562]: E0912 17:31:54.113363 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.113378 kubelet[2562]: W0912 17:31:54.113374 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.113437 kubelet[2562]: E0912 17:31:54.113384 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.113622 kubelet[2562]: E0912 17:31:54.113608 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.113622 kubelet[2562]: W0912 17:31:54.113618 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.113670 kubelet[2562]: E0912 17:31:54.113627 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:31:54.113863 kubelet[2562]: E0912 17:31:54.113847 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.113863 kubelet[2562]: W0912 17:31:54.113857 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.113918 kubelet[2562]: E0912 17:31:54.113866 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.114290 kubelet[2562]: E0912 17:31:54.114266 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.114332 kubelet[2562]: W0912 17:31:54.114290 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.114332 kubelet[2562]: E0912 17:31:54.114310 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.114576 kubelet[2562]: E0912 17:31:54.114552 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.114576 kubelet[2562]: W0912 17:31:54.114567 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.114624 kubelet[2562]: E0912 17:31:54.114577 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.114822 kubelet[2562]: E0912 17:31:54.114806 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.114822 kubelet[2562]: W0912 17:31:54.114819 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.114871 kubelet[2562]: E0912 17:31:54.114829 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.115052 kubelet[2562]: E0912 17:31:54.115037 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.115052 kubelet[2562]: W0912 17:31:54.115049 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.115104 kubelet[2562]: E0912 17:31:54.115059 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:31:54.115311 kubelet[2562]: E0912 17:31:54.115296 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.115311 kubelet[2562]: W0912 17:31:54.115307 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.115362 kubelet[2562]: E0912 17:31:54.115317 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.115574 kubelet[2562]: E0912 17:31:54.115551 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.115574 kubelet[2562]: W0912 17:31:54.115564 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.115625 kubelet[2562]: E0912 17:31:54.115573 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.115798 kubelet[2562]: E0912 17:31:54.115784 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.115798 kubelet[2562]: W0912 17:31:54.115795 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.115854 kubelet[2562]: E0912 17:31:54.115804 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.116074 kubelet[2562]: E0912 17:31:54.116060 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.116074 kubelet[2562]: W0912 17:31:54.116071 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.116139 kubelet[2562]: E0912 17:31:54.116080 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.116345 kubelet[2562]: E0912 17:31:54.116330 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.116345 kubelet[2562]: W0912 17:31:54.116342 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.116409 kubelet[2562]: E0912 17:31:54.116351 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:31:54.116636 kubelet[2562]: E0912 17:31:54.116619 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.116636 kubelet[2562]: W0912 17:31:54.116632 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.116758 kubelet[2562]: E0912 17:31:54.116643 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.116891 kubelet[2562]: E0912 17:31:54.116875 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.116891 kubelet[2562]: W0912 17:31:54.116887 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.116948 kubelet[2562]: E0912 17:31:54.116897 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.117161 kubelet[2562]: E0912 17:31:54.117145 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.117161 kubelet[2562]: W0912 17:31:54.117157 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.117231 kubelet[2562]: E0912 17:31:54.117168 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.117413 kubelet[2562]: E0912 17:31:54.117398 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.117413 kubelet[2562]: W0912 17:31:54.117411 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.117472 kubelet[2562]: E0912 17:31:54.117420 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.117726 kubelet[2562]: E0912 17:31:54.117707 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.117760 kubelet[2562]: W0912 17:31:54.117725 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.117760 kubelet[2562]: E0912 17:31:54.117739 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:31:54.118094 kubelet[2562]: E0912 17:31:54.118077 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.118094 kubelet[2562]: W0912 17:31:54.118091 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.118158 kubelet[2562]: E0912 17:31:54.118102 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.118397 kubelet[2562]: E0912 17:31:54.118381 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.118397 kubelet[2562]: W0912 17:31:54.118393 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.118445 kubelet[2562]: E0912 17:31:54.118402 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.118719 kubelet[2562]: E0912 17:31:54.118683 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.118719 kubelet[2562]: W0912 17:31:54.118703 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.118719 kubelet[2562]: E0912 17:31:54.118725 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.119053 kubelet[2562]: E0912 17:31:54.119031 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.119053 kubelet[2562]: W0912 17:31:54.119048 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.119140 kubelet[2562]: E0912 17:31:54.119060 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.119353 kubelet[2562]: E0912 17:31:54.119335 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.119353 kubelet[2562]: W0912 17:31:54.119346 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.119433 kubelet[2562]: E0912 17:31:54.119356 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:31:54.119692 kubelet[2562]: E0912 17:31:54.119640 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.119692 kubelet[2562]: W0912 17:31:54.119653 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.119692 kubelet[2562]: E0912 17:31:54.119662 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.123080 containerd[1469]: time="2025-09-12T17:31:54.123043821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mt456,Uid:5c5723df-c6d9-461a-830c-0293d079a933,Namespace:calico-system,Attempt:0,}" Sep 12 17:31:54.223905 kubelet[2562]: E0912 17:31:54.223859 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:31:54.223905 kubelet[2562]: W0912 17:31:54.223886 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:31:54.224052 kubelet[2562]: E0912 17:31:54.223932 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:31:54.285964 containerd[1469]: time="2025-09-12T17:31:54.284565089Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:54.285964 containerd[1469]: time="2025-09-12T17:31:54.285816325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:54.285964 containerd[1469]: time="2025-09-12T17:31:54.285829441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:54.286412 containerd[1469]: time="2025-09-12T17:31:54.285941077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:54.287306 containerd[1469]: time="2025-09-12T17:31:54.284173901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:31:54.287520 containerd[1469]: time="2025-09-12T17:31:54.287379796Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:31:54.287928 containerd[1469]: time="2025-09-12T17:31:54.287611286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:54.289785 containerd[1469]: time="2025-09-12T17:31:54.288668816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:31:54.305995 systemd[1]: Started cri-containerd-2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc.scope - libcontainer container 2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc. 
Sep 12 17:31:54.310584 systemd[1]: Started cri-containerd-a503d790ce4f22f84fc4155800142218a5cdae221af80ace40947aef420c5c95.scope - libcontainer container a503d790ce4f22f84fc4155800142218a5cdae221af80ace40947aef420c5c95. Sep 12 17:31:54.344619 containerd[1469]: time="2025-09-12T17:31:54.344563108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mt456,Uid:5c5723df-c6d9-461a-830c-0293d079a933,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\"" Sep 12 17:31:54.349355 containerd[1469]: time="2025-09-12T17:31:54.348586999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:31:54.353098 containerd[1469]: time="2025-09-12T17:31:54.353045353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7d988cf888-t26qn,Uid:eb95f7c7-bff2-4df1-8815-21481962e466,Namespace:calico-system,Attempt:0,} returns sandbox id \"a503d790ce4f22f84fc4155800142218a5cdae221af80ace40947aef420c5c95\"" Sep 12 17:31:54.353760 kubelet[2562]: E0912 17:31:54.353727 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:55.892323 kubelet[2562]: E0912 17:31:55.892254 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:31:56.233699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3758718744.mount: Deactivated successfully. 
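Annotation: csi-node-driver-mbdtc keeps failing to sync above because no CNI network config has been dropped yet ("cni plugin not initialized"), matching containerd's earlier "wait for other system components to drop the config". A small check of the conventional config directory, the way libcni enumerates it; the /etc/cni/net.d path is an assumption, and calico-node populates it once running:

```go
package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Default CRI plugin config location; an assumption for this node.
	files, err := libcni.ConfFiles("/etc/cni/net.d", []string{".conf", ".conflist", ".json"})
	if err != nil {
		log.Fatal(err)
	}
	if len(files) == 0 {
		fmt.Println("no CNI config yet: NetworkReady will stay false")
		return
	}
	for _, f := range files {
		fmt.Println("found CNI config:", f)
	}
}
```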
Sep 12 17:31:56.307559 containerd[1469]: time="2025-09-12T17:31:56.307513257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:56.310364 containerd[1469]: time="2025-09-12T17:31:56.310311002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 12 17:31:56.311506 containerd[1469]: time="2025-09-12T17:31:56.311482930Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:56.313813 containerd[1469]: time="2025-09-12T17:31:56.313783532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:56.314635 containerd[1469]: time="2025-09-12T17:31:56.314400216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.965772408s" Sep 12 17:31:56.314635 containerd[1469]: time="2025-09-12T17:31:56.314451205Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:31:56.316080 containerd[1469]: time="2025-09-12T17:31:56.316056592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:31:56.319757 containerd[1469]: time="2025-09-12T17:31:56.319703721Z" level=info msg="CreateContainer within sandbox \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:31:56.340137 containerd[1469]: time="2025-09-12T17:31:56.340095658Z" level=info msg="CreateContainer within sandbox \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563\"" Sep 12 17:31:56.340672 containerd[1469]: time="2025-09-12T17:31:56.340599794Z" level=info msg="StartContainer for \"a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563\"" Sep 12 17:31:56.381373 systemd[1]: Started cri-containerd-a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563.scope - libcontainer container a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563. Sep 12 17:31:56.412370 containerd[1469]: time="2025-09-12T17:31:56.412328573Z" level=info msg="StartContainer for \"a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563\" returns successfully" Sep 12 17:31:56.425560 systemd[1]: cri-containerd-a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563.scope: Deactivated successfully. 
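The scope deactivating immediately after the successful StartContainer is expected: flexvol-driver is an init container that copies its driver binary into the kubelet plugin directory and exits, and the "shim disconnected" / rootfs.mount cleanup lines that follow are the normal teardown of a finished container, not a crash. The logged pull duration can also be cross-checked against the PullImage/Pulled timestamps above; a small sketch, with the timestamps copied from those two containerd entries:

    // pull_duration_check.go
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	start, err := time.Parse(time.RFC3339Nano, "2025-09-12T17:31:54.348586999Z") // PullImage logged
    	if err != nil {
    		panic(err)
    	}
    	done, err := time.Parse(time.RFC3339Nano, "2025-09-12T17:31:56.314400216Z") // Pulled logged
    	if err != nil {
    		panic(err)
    	}
    	// Prints 1.965813217s, within ~40µs of the logged "1.965772408s"
    	// (containerd times the pull internally, so a small skew is expected).
    	fmt.Println(done.Sub(start))
    }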
Sep 12 17:31:56.535575 containerd[1469]: time="2025-09-12T17:31:56.535420631Z" level=info msg="shim disconnected" id=a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563 namespace=k8s.io Sep 12 17:31:56.535575 containerd[1469]: time="2025-09-12T17:31:56.535479416Z" level=warning msg="cleaning up after shim disconnected" id=a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563 namespace=k8s.io Sep 12 17:31:56.535575 containerd[1469]: time="2025-09-12T17:31:56.535491018Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:31:57.212911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6960e6e5a5542b628a33478e8889a44c1c01510c3a22072eccccd8f19a5a563-rootfs.mount: Deactivated successfully. Sep 12 17:31:57.892786 kubelet[2562]: E0912 17:31:57.892690 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:31:59.805584 containerd[1469]: time="2025-09-12T17:31:59.805510047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:59.806523 containerd[1469]: time="2025-09-12T17:31:59.806453239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 12 17:31:59.807866 containerd[1469]: time="2025-09-12T17:31:59.807835477Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:59.811326 containerd[1469]: time="2025-09-12T17:31:59.811277974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:59.811893 containerd[1469]: time="2025-09-12T17:31:59.811863224Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.495778849s" Sep 12 17:31:59.811932 containerd[1469]: time="2025-09-12T17:31:59.811899684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:31:59.812997 containerd[1469]: time="2025-09-12T17:31:59.812975300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:31:59.829500 containerd[1469]: time="2025-09-12T17:31:59.829441843Z" level=info msg="CreateContainer within sandbox \"a503d790ce4f22f84fc4155800142218a5cdae221af80ace40947aef420c5c95\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:31:59.848891 containerd[1469]: time="2025-09-12T17:31:59.848842220Z" level=info msg="CreateContainer within sandbox \"a503d790ce4f22f84fc4155800142218a5cdae221af80ace40947aef420c5c95\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"efc67abdee499d42337e347ea0495ee5ffc37639943c3ab737828b87abdcf1f8\"" Sep 12 17:31:59.849302 containerd[1469]: 
time="2025-09-12T17:31:59.849280085Z" level=info msg="StartContainer for \"efc67abdee499d42337e347ea0495ee5ffc37639943c3ab737828b87abdcf1f8\"" Sep 12 17:31:59.892387 kubelet[2562]: E0912 17:31:59.892320 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:31:59.897446 systemd[1]: Started cri-containerd-efc67abdee499d42337e347ea0495ee5ffc37639943c3ab737828b87abdcf1f8.scope - libcontainer container efc67abdee499d42337e347ea0495ee5ffc37639943c3ab737828b87abdcf1f8. Sep 12 17:31:59.942868 containerd[1469]: time="2025-09-12T17:31:59.942820175Z" level=info msg="StartContainer for \"efc67abdee499d42337e347ea0495ee5ffc37639943c3ab737828b87abdcf1f8\" returns successfully" Sep 12 17:31:59.966024 kubelet[2562]: E0912 17:31:59.965970 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:31:59.979419 kubelet[2562]: I0912 17:31:59.979352 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7d988cf888-t26qn" podStartSLOduration=1.5215212 podStartE2EDuration="6.979333352s" podCreationTimestamp="2025-09-12 17:31:53 +0000 UTC" firstStartedPulling="2025-09-12 17:31:54.355015904 +0000 UTC m=+20.569303537" lastFinishedPulling="2025-09-12 17:31:59.812828046 +0000 UTC m=+26.027115689" observedRunningTime="2025-09-12 17:31:59.978941475 +0000 UTC m=+26.193229098" watchObservedRunningTime="2025-09-12 17:31:59.979333352 +0000 UTC m=+26.193620985" Sep 12 17:32:00.970142 kubelet[2562]: I0912 17:32:00.970067 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:00.971346 kubelet[2562]: E0912 17:32:00.970701 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:01.892597 kubelet[2562]: E0912 17:32:01.892506 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:32:03.895139 kubelet[2562]: E0912 17:32:03.895075 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:32:04.932935 containerd[1469]: time="2025-09-12T17:32:04.932869709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:04.956630 containerd[1469]: time="2025-09-12T17:32:04.956534484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:32:04.983756 containerd[1469]: time="2025-09-12T17:32:04.983693302Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:05.016123 containerd[1469]: time="2025-09-12T17:32:05.016078765Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:05.017005 containerd[1469]: time="2025-09-12T17:32:05.016972884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.203968206s" Sep 12 17:32:05.017104 containerd[1469]: time="2025-09-12T17:32:05.017007349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:32:05.050804 containerd[1469]: time="2025-09-12T17:32:05.050708660Z" level=info msg="CreateContainer within sandbox \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:32:05.297615 containerd[1469]: time="2025-09-12T17:32:05.297536188Z" level=info msg="CreateContainer within sandbox \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8\"" Sep 12 17:32:05.298307 containerd[1469]: time="2025-09-12T17:32:05.298241815Z" level=info msg="StartContainer for \"1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8\"" Sep 12 17:32:05.332351 systemd[1]: Started cri-containerd-1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8.scope - libcontainer container 1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8. Sep 12 17:32:05.623660 containerd[1469]: time="2025-09-12T17:32:05.623399302Z" level=info msg="StartContainer for \"1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8\" returns successfully" Sep 12 17:32:05.912404 kubelet[2562]: E0912 17:32:05.912229 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:32:07.444957 systemd[1]: cri-containerd-1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8.scope: Deactivated successfully. Sep 12 17:32:07.467423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8-rootfs.mount: Deactivated successfully. 
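Same pattern as the flexvol step: install-cni is an init container, so its scope deactivating after a successful start means it finished writing the CNI plugin binaries and network config onto the host. The "cni plugin not initialized" pod_workers errors keep repeating until the runtime can actually load a CNI config; a sketch of that readiness condition, assuming containerd's default config directory /etc/cni/net.d (an assumption, not something read from this log):

    // cni_ready_sketch.go — NetworkReady stays false until a CNI config
    // exists where the runtime looks for one.
    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	entries, err := os.ReadDir("/etc/cni/net.d") // containerd's default conf_dir (assumed)
    	if err != nil || len(entries) == 0 {
    		fmt.Println("container runtime network not ready: cni plugin not initialized")
    		return
    	}
    	for _, e := range entries {
    		fmt.Println("CNI config present:", e.Name())
    	}
    }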
Sep 12 17:32:07.506551 kubelet[2562]: I0912 17:32:07.506485 2562 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:32:08.050521 containerd[1469]: time="2025-09-12T17:32:08.050442949Z" level=info msg="shim disconnected" id=1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8 namespace=k8s.io Sep 12 17:32:08.050521 containerd[1469]: time="2025-09-12T17:32:08.050507843Z" level=warning msg="cleaning up after shim disconnected" id=1d6d0fc8030ff506820b23a3fb260bf2ef8da69b07131105996af317f5923bd8 namespace=k8s.io Sep 12 17:32:08.050521 containerd[1469]: time="2025-09-12T17:32:08.050520948Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:32:08.063030 systemd[1]: Created slice kubepods-burstable-pod6f401e60_a51f_4ed0_8199_7c39a5b7cb6f.slice - libcontainer container kubepods-burstable-pod6f401e60_a51f_4ed0_8199_7c39a5b7cb6f.slice. Sep 12 17:32:08.076556 systemd[1]: Created slice kubepods-besteffort-podd8b105ba_edcc_41c9_a17f_5d76bf2daf67.slice - libcontainer container kubepods-besteffort-podd8b105ba_edcc_41c9_a17f_5d76bf2daf67.slice. Sep 12 17:32:08.079375 containerd[1469]: time="2025-09-12T17:32:08.079335993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbdtc,Uid:d8b105ba-edcc-41c9-a17f-5d76bf2daf67,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:08.115822 kubelet[2562]: I0912 17:32:08.115758 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da63dc76-0ae4-4dcd-9e39-e6b5230d815d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-s5glj\" (UID: \"da63dc76-0ae4-4dcd-9e39-e6b5230d815d\") " pod="calico-system/goldmane-54d579b49d-s5glj" Sep 12 17:32:08.115822 kubelet[2562]: I0912 17:32:08.115804 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wws7r\" (UniqueName: \"kubernetes.io/projected/da63dc76-0ae4-4dcd-9e39-e6b5230d815d-kube-api-access-wws7r\") pod \"goldmane-54d579b49d-s5glj\" (UID: \"da63dc76-0ae4-4dcd-9e39-e6b5230d815d\") " pod="calico-system/goldmane-54d579b49d-s5glj" Sep 12 17:32:08.116013 kubelet[2562]: I0912 17:32:08.115852 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/da63dc76-0ae4-4dcd-9e39-e6b5230d815d-goldmane-key-pair\") pod \"goldmane-54d579b49d-s5glj\" (UID: \"da63dc76-0ae4-4dcd-9e39-e6b5230d815d\") " pod="calico-system/goldmane-54d579b49d-s5glj" Sep 12 17:32:08.116013 kubelet[2562]: I0912 17:32:08.115909 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts92q\" (UniqueName: \"kubernetes.io/projected/6f401e60-a51f-4ed0-8199-7c39a5b7cb6f-kube-api-access-ts92q\") pod \"coredns-674b8bbfcf-6bfmb\" (UID: \"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f\") " pod="kube-system/coredns-674b8bbfcf-6bfmb" Sep 12 17:32:08.116013 kubelet[2562]: I0912 17:32:08.115933 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da63dc76-0ae4-4dcd-9e39-e6b5230d815d-config\") pod \"goldmane-54d579b49d-s5glj\" (UID: \"da63dc76-0ae4-4dcd-9e39-e6b5230d815d\") " pod="calico-system/goldmane-54d579b49d-s5glj" Sep 12 17:32:08.116013 kubelet[2562]: I0912 17:32:08.115953 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f401e60-a51f-4ed0-8199-7c39a5b7cb6f-config-volume\") pod \"coredns-674b8bbfcf-6bfmb\" (UID: \"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f\") " pod="kube-system/coredns-674b8bbfcf-6bfmb" Sep 12 17:32:08.159334 systemd[1]: Created slice kubepods-besteffort-podda63dc76_0ae4_4dcd_9e39_e6b5230d815d.slice - libcontainer container kubepods-besteffort-podda63dc76_0ae4_4dcd_9e39_e6b5230d815d.slice. Sep 12 17:32:08.163432 systemd[1]: Created slice kubepods-burstable-pod2fbe993a_426d_4181_874c_464b718119c8.slice - libcontainer container kubepods-burstable-pod2fbe993a_426d_4181_874c_464b718119c8.slice. Sep 12 17:32:08.216399 kubelet[2562]: I0912 17:32:08.216335 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g9rz\" (UniqueName: \"kubernetes.io/projected/2fbe993a-426d-4181-874c-464b718119c8-kube-api-access-9g9rz\") pod \"coredns-674b8bbfcf-mp9m5\" (UID: \"2fbe993a-426d-4181-874c-464b718119c8\") " pod="kube-system/coredns-674b8bbfcf-mp9m5" Sep 12 17:32:08.216570 kubelet[2562]: I0912 17:32:08.216415 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2fbe993a-426d-4181-874c-464b718119c8-config-volume\") pod \"coredns-674b8bbfcf-mp9m5\" (UID: \"2fbe993a-426d-4181-874c-464b718119c8\") " pod="kube-system/coredns-674b8bbfcf-mp9m5" Sep 12 17:32:08.229144 systemd[1]: Created slice kubepods-besteffort-poddbb9727f_81ce_4dc4_900b_5e7086236c76.slice - libcontainer container kubepods-besteffort-poddbb9727f_81ce_4dc4_900b_5e7086236c76.slice. Sep 12 17:32:08.290565 containerd[1469]: time="2025-09-12T17:32:08.290486621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:32:08.297017 systemd[1]: Created slice kubepods-besteffort-podf0219ed1_d2e0_4c42_9b74_ef9a21b8a523.slice - libcontainer container kubepods-besteffort-podf0219ed1_d2e0_4c42_9b74_ef9a21b8a523.slice. 
Sep 12 17:32:08.317515 kubelet[2562]: I0912 17:32:08.317346 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctxlq\" (UniqueName: \"kubernetes.io/projected/dbb9727f-81ce-4dc4-900b-5e7086236c76-kube-api-access-ctxlq\") pod \"calico-apiserver-797c87987f-628v2\" (UID: \"dbb9727f-81ce-4dc4-900b-5e7086236c76\") " pod="calico-apiserver/calico-apiserver-797c87987f-628v2" Sep 12 17:32:08.317515 kubelet[2562]: I0912 17:32:08.317395 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f0219ed1-d2e0-4c42-9b74-ef9a21b8a523-tigera-ca-bundle\") pod \"calico-kube-controllers-7d58b7c7df-c2h2g\" (UID: \"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523\") " pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" Sep 12 17:32:08.317515 kubelet[2562]: I0912 17:32:08.317421 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbb9727f-81ce-4dc4-900b-5e7086236c76-calico-apiserver-certs\") pod \"calico-apiserver-797c87987f-628v2\" (UID: \"dbb9727f-81ce-4dc4-900b-5e7086236c76\") " pod="calico-apiserver/calico-apiserver-797c87987f-628v2" Sep 12 17:32:08.318643 kubelet[2562]: I0912 17:32:08.318006 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgrbc\" (UniqueName: \"kubernetes.io/projected/f0219ed1-d2e0-4c42-9b74-ef9a21b8a523-kube-api-access-fgrbc\") pod \"calico-kube-controllers-7d58b7c7df-c2h2g\" (UID: \"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523\") " pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" Sep 12 17:32:08.331629 systemd[1]: Created slice kubepods-besteffort-podafe9d8f3_858a_48bc_b6b4_9176e5274326.slice - libcontainer container kubepods-besteffort-podafe9d8f3_858a_48bc_b6b4_9176e5274326.slice. Sep 12 17:32:08.367275 kubelet[2562]: E0912 17:32:08.367194 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:08.367866 containerd[1469]: time="2025-09-12T17:32:08.367821211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bfmb,Uid:6f401e60-a51f-4ed0-8199-7c39a5b7cb6f,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:08.370341 systemd[1]: Created slice kubepods-besteffort-podc62c609a_3cbb_45a5_ba08_4db418faacd8.slice - libcontainer container kubepods-besteffort-podc62c609a_3cbb_45a5_ba08_4db418faacd8.slice. 
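From here to the end of the capture, every RunPodSandbox and StopPodSandbox attempt (csi-node-driver, both coredns pods, goldmane, whisker, both calico-apiserver pods, and calico-kube-controllers) fails with the identical root cause spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, and the file does not exist because calico/node is still coming up. Kubelet keeps retrying, and these errors clear on their own once calico/node writes that file. A minimal sketch of the implied check, illustrative only rather than the plugin's actual source:

    // nodename_check_sketch.go
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	b, err := os.ReadFile("/var/lib/calico/nodename") // written by calico/node at startup
    	if err != nil {
    		fmt.Fprintln(os.Stderr,
    			"check that the calico/node container is running and has mounted /var/lib/calico/:", err)
    		os.Exit(1)
    	}
    	fmt.Println("calico nodename:", strings.TrimSpace(string(b)))
    }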
Sep 12 17:32:08.418589 kubelet[2562]: I0912 17:32:08.418534 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzpw5\" (UniqueName: \"kubernetes.io/projected/c62c609a-3cbb-45a5-ba08-4db418faacd8-kube-api-access-wzpw5\") pod \"calico-apiserver-797c87987f-th4cn\" (UID: \"c62c609a-3cbb-45a5-ba08-4db418faacd8\") " pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" Sep 12 17:32:08.418589 kubelet[2562]: I0912 17:32:08.418588 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-backend-key-pair\") pod \"whisker-9fd4cb64f-4pbh9\" (UID: \"afe9d8f3-858a-48bc-b6b4-9176e5274326\") " pod="calico-system/whisker-9fd4cb64f-4pbh9" Sep 12 17:32:08.418804 kubelet[2562]: I0912 17:32:08.418627 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c62c609a-3cbb-45a5-ba08-4db418faacd8-calico-apiserver-certs\") pod \"calico-apiserver-797c87987f-th4cn\" (UID: \"c62c609a-3cbb-45a5-ba08-4db418faacd8\") " pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" Sep 12 17:32:08.418804 kubelet[2562]: I0912 17:32:08.418653 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj4g5\" (UniqueName: \"kubernetes.io/projected/afe9d8f3-858a-48bc-b6b4-9176e5274326-kube-api-access-nj4g5\") pod \"whisker-9fd4cb64f-4pbh9\" (UID: \"afe9d8f3-858a-48bc-b6b4-9176e5274326\") " pod="calico-system/whisker-9fd4cb64f-4pbh9" Sep 12 17:32:08.418804 kubelet[2562]: I0912 17:32:08.418744 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-ca-bundle\") pod \"whisker-9fd4cb64f-4pbh9\" (UID: \"afe9d8f3-858a-48bc-b6b4-9176e5274326\") " pod="calico-system/whisker-9fd4cb64f-4pbh9" Sep 12 17:32:08.463118 containerd[1469]: time="2025-09-12T17:32:08.463048638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-s5glj,Uid:da63dc76-0ae4-4dcd-9e39-e6b5230d815d,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:08.466535 kubelet[2562]: E0912 17:32:08.466321 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:08.468552 containerd[1469]: time="2025-09-12T17:32:08.468488136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mp9m5,Uid:2fbe993a-426d-4181-874c-464b718119c8,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:08.545635 containerd[1469]: time="2025-09-12T17:32:08.544859499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-628v2,Uid:dbb9727f-81ce-4dc4-900b-5e7086236c76,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:32:08.586711 containerd[1469]: time="2025-09-12T17:32:08.586552175Z" level=error msg="Failed to destroy network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:08.589773 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523-shm.mount: Deactivated successfully. Sep 12 17:32:08.592776 containerd[1469]: time="2025-09-12T17:32:08.590022624Z" level=error msg="encountered an error cleaning up failed sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:08.592841 containerd[1469]: time="2025-09-12T17:32:08.592796468Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbdtc,Uid:d8b105ba-edcc-41c9-a17f-5d76bf2daf67,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:08.593138 kubelet[2562]: E0912 17:32:08.593071 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:08.593470 kubelet[2562]: E0912 17:32:08.593177 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:32:08.593470 kubelet[2562]: E0912 17:32:08.593244 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mbdtc" Sep 12 17:32:08.593470 kubelet[2562]: E0912 17:32:08.593317 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mbdtc_calico-system(d8b105ba-edcc-41c9-a17f-5d76bf2daf67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mbdtc_calico-system(d8b105ba-edcc-41c9-a17f-5d76bf2daf67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:32:08.600853 containerd[1469]: time="2025-09-12T17:32:08.600815636Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7d58b7c7df-c2h2g,Uid:f0219ed1-d2e0-4c42-9b74-ef9a21b8a523,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:08.634856 containerd[1469]: time="2025-09-12T17:32:08.634824928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9fd4cb64f-4pbh9,Uid:afe9d8f3-858a-48bc-b6b4-9176e5274326,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:08.676877 containerd[1469]: time="2025-09-12T17:32:08.676812831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-th4cn,Uid:c62c609a-3cbb-45a5-ba08-4db418faacd8,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:32:09.035199 containerd[1469]: time="2025-09-12T17:32:09.035108734Z" level=error msg="Failed to destroy network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.035649 containerd[1469]: time="2025-09-12T17:32:09.035609415Z" level=error msg="encountered an error cleaning up failed sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.035704 containerd[1469]: time="2025-09-12T17:32:09.035675411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bfmb,Uid:6f401e60-a51f-4ed0-8199-7c39a5b7cb6f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.036034 kubelet[2562]: E0912 17:32:09.035963 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.036034 kubelet[2562]: E0912 17:32:09.036048 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6bfmb" Sep 12 17:32:09.036236 kubelet[2562]: E0912 17:32:09.036074 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6bfmb" Sep 12 17:32:09.036282 kubelet[2562]: E0912 17:32:09.036204 2562 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6bfmb_kube-system(6f401e60-a51f-4ed0-8199-7c39a5b7cb6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6bfmb_kube-system(6f401e60-a51f-4ed0-8199-7c39a5b7cb6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6bfmb" podUID="6f401e60-a51f-4ed0-8199-7c39a5b7cb6f" Sep 12 17:32:09.224595 kubelet[2562]: I0912 17:32:09.224542 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:09.225719 containerd[1469]: time="2025-09-12T17:32:09.225431653Z" level=info msg="StopPodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\"" Sep 12 17:32:09.226345 kubelet[2562]: I0912 17:32:09.225825 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:09.226554 containerd[1469]: time="2025-09-12T17:32:09.226518327Z" level=info msg="StopPodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\"" Sep 12 17:32:09.227140 containerd[1469]: time="2025-09-12T17:32:09.227096385Z" level=info msg="Ensure that sandbox 2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197 in task-service has been cleanup successfully" Sep 12 17:32:09.231435 containerd[1469]: time="2025-09-12T17:32:09.231381063Z" level=info msg="Ensure that sandbox 77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523 in task-service has been cleanup successfully" Sep 12 17:32:09.268035 containerd[1469]: time="2025-09-12T17:32:09.267974342Z" level=error msg="StopPodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" failed" error="failed to destroy network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.268534 kubelet[2562]: E0912 17:32:09.268386 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:09.268534 kubelet[2562]: E0912 17:32:09.268460 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523"} Sep 12 17:32:09.268625 containerd[1469]: time="2025-09-12T17:32:09.268441719Z" level=error msg="StopPodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" failed" error="failed to destroy network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.268673 kubelet[2562]: E0912 17:32:09.268606 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:09.268708 kubelet[2562]: E0912 17:32:09.268677 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197"} Sep 12 17:32:09.268739 kubelet[2562]: E0912 17:32:09.268721 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:09.268810 kubelet[2562]: E0912 17:32:09.268752 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6bfmb" podUID="6f401e60-a51f-4ed0-8199-7c39a5b7cb6f" Sep 12 17:32:09.268991 kubelet[2562]: E0912 17:32:09.268896 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:09.268991 kubelet[2562]: E0912 17:32:09.268945 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8b105ba-edcc-41c9-a17f-5d76bf2daf67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mbdtc" podUID="d8b105ba-edcc-41c9-a17f-5d76bf2daf67" Sep 12 17:32:09.315772 containerd[1469]: time="2025-09-12T17:32:09.315619138Z" level=error msg="Failed to destroy network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.316113 containerd[1469]: time="2025-09-12T17:32:09.316078408Z" level=error msg="encountered an error cleaning up failed sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.316180 containerd[1469]: time="2025-09-12T17:32:09.316146869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mp9m5,Uid:2fbe993a-426d-4181-874c-464b718119c8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.316560 kubelet[2562]: E0912 17:32:09.316498 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.316642 kubelet[2562]: E0912 17:32:09.316577 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mp9m5" Sep 12 17:32:09.316642 kubelet[2562]: E0912 17:32:09.316606 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mp9m5" Sep 12 17:32:09.316698 kubelet[2562]: E0912 17:32:09.316663 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mp9m5_kube-system(2fbe993a-426d-4181-874c-464b718119c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mp9m5_kube-system(2fbe993a-426d-4181-874c-464b718119c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mp9m5" podUID="2fbe993a-426d-4181-874c-464b718119c8" Sep 12 17:32:09.430321 containerd[1469]: time="2025-09-12T17:32:09.430251674Z" level=error msg="Failed to destroy network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.430817 containerd[1469]: time="2025-09-12T17:32:09.430786650Z" level=error msg="encountered an error cleaning up failed sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.430863 containerd[1469]: time="2025-09-12T17:32:09.430845092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-s5glj,Uid:da63dc76-0ae4-4dcd-9e39-e6b5230d815d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.431178 kubelet[2562]: E0912 17:32:09.431110 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.431421 kubelet[2562]: E0912 17:32:09.431196 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-s5glj" Sep 12 17:32:09.431421 kubelet[2562]: E0912 17:32:09.431240 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-s5glj" Sep 12 17:32:09.431421 kubelet[2562]: E0912 17:32:09.431310 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-s5glj_calico-system(da63dc76-0ae4-4dcd-9e39-e6b5230d815d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-s5glj_calico-system(da63dc76-0ae4-4dcd-9e39-e6b5230d815d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-s5glj" podUID="da63dc76-0ae4-4dcd-9e39-e6b5230d815d" Sep 12 17:32:09.473806 containerd[1469]: time="2025-09-12T17:32:09.473729868Z" level=error msg="Failed to destroy network for sandbox 
\"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.474268 containerd[1469]: time="2025-09-12T17:32:09.474184450Z" level=error msg="encountered an error cleaning up failed sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.474268 containerd[1469]: time="2025-09-12T17:32:09.474250927Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-628v2,Uid:dbb9727f-81ce-4dc4-900b-5e7086236c76,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.474433 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197-shm.mount: Deactivated successfully. Sep 12 17:32:09.476330 kubelet[2562]: E0912 17:32:09.474676 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.476330 kubelet[2562]: E0912 17:32:09.474755 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797c87987f-628v2" Sep 12 17:32:09.476330 kubelet[2562]: E0912 17:32:09.474779 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797c87987f-628v2" Sep 12 17:32:09.476473 kubelet[2562]: E0912 17:32:09.474848 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797c87987f-628v2_calico-apiserver(dbb9727f-81ce-4dc4-900b-5e7086236c76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797c87987f-628v2_calico-apiserver(dbb9727f-81ce-4dc4-900b-5e7086236c76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797c87987f-628v2" podUID="dbb9727f-81ce-4dc4-900b-5e7086236c76" Sep 12 17:32:09.478172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516-shm.mount: Deactivated successfully. Sep 12 17:32:09.537209 containerd[1469]: time="2025-09-12T17:32:09.537144417Z" level=error msg="Failed to destroy network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.537664 containerd[1469]: time="2025-09-12T17:32:09.537625890Z" level=error msg="encountered an error cleaning up failed sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.537769 containerd[1469]: time="2025-09-12T17:32:09.537732315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9fd4cb64f-4pbh9,Uid:afe9d8f3-858a-48bc-b6b4-9176e5274326,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.538267 kubelet[2562]: E0912 17:32:09.537991 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.538267 kubelet[2562]: E0912 17:32:09.538079 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9fd4cb64f-4pbh9" Sep 12 17:32:09.538267 kubelet[2562]: E0912 17:32:09.538109 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-9fd4cb64f-4pbh9" Sep 12 17:32:09.538434 kubelet[2562]: E0912 17:32:09.538184 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-9fd4cb64f-4pbh9_calico-system(afe9d8f3-858a-48bc-b6b4-9176e5274326)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-9fd4cb64f-4pbh9_calico-system(afe9d8f3-858a-48bc-b6b4-9176e5274326)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9fd4cb64f-4pbh9" podUID="afe9d8f3-858a-48bc-b6b4-9176e5274326" Sep 12 17:32:09.540172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9-shm.mount: Deactivated successfully. Sep 12 17:32:09.620209 containerd[1469]: time="2025-09-12T17:32:09.620066498Z" level=error msg="Failed to destroy network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.620573 containerd[1469]: time="2025-09-12T17:32:09.620534335Z" level=error msg="encountered an error cleaning up failed sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.620616 containerd[1469]: time="2025-09-12T17:32:09.620594210Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58b7c7df-c2h2g,Uid:f0219ed1-d2e0-4c42-9b74-ef9a21b8a523,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.620948 kubelet[2562]: E0912 17:32:09.620882 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:09.621344 kubelet[2562]: E0912 17:32:09.620967 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" Sep 12 17:32:09.621344 kubelet[2562]: E0912 17:32:09.620993 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" Sep 12 
17:32:09.621344 kubelet[2562]: E0912 17:32:09.621068 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7d58b7c7df-c2h2g_calico-system(f0219ed1-d2e0-4c42-9b74-ef9a21b8a523)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7d58b7c7df-c2h2g_calico-system(f0219ed1-d2e0-4c42-9b74-ef9a21b8a523)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" podUID="f0219ed1-d2e0-4c42-9b74-ef9a21b8a523" Sep 12 17:32:09.624937 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c-shm.mount: Deactivated successfully. Sep 12 17:32:10.150124 containerd[1469]: time="2025-09-12T17:32:10.150042715Z" level=error msg="Failed to destroy network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.150689 containerd[1469]: time="2025-09-12T17:32:10.150649799Z" level=error msg="encountered an error cleaning up failed sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.150753 containerd[1469]: time="2025-09-12T17:32:10.150720384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-th4cn,Uid:c62c609a-3cbb-45a5-ba08-4db418faacd8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.151094 kubelet[2562]: E0912 17:32:10.151017 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.151309 kubelet[2562]: E0912 17:32:10.151111 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" Sep 12 17:32:10.151309 kubelet[2562]: E0912 17:32:10.151139 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" Sep 12 17:32:10.151309 kubelet[2562]: E0912 17:32:10.151207 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-797c87987f-th4cn_calico-apiserver(c62c609a-3cbb-45a5-ba08-4db418faacd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-797c87987f-th4cn_calico-apiserver(c62c609a-3cbb-45a5-ba08-4db418faacd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" podUID="c62c609a-3cbb-45a5-ba08-4db418faacd8" Sep 12 17:32:10.229102 kubelet[2562]: I0912 17:32:10.229063 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:10.229711 containerd[1469]: time="2025-09-12T17:32:10.229673378Z" level=info msg="StopPodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\"" Sep 12 17:32:10.230098 containerd[1469]: time="2025-09-12T17:32:10.229860947Z" level=info msg="Ensure that sandbox d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c in task-service has been cleanup successfully" Sep 12 17:32:10.231085 kubelet[2562]: I0912 17:32:10.231033 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:10.231733 containerd[1469]: time="2025-09-12T17:32:10.231655457Z" level=info msg="StopPodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\"" Sep 12 17:32:10.231855 containerd[1469]: time="2025-09-12T17:32:10.231818329Z" level=info msg="Ensure that sandbox f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2 in task-service has been cleanup successfully" Sep 12 17:32:10.233764 kubelet[2562]: I0912 17:32:10.233731 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:10.234448 containerd[1469]: time="2025-09-12T17:32:10.234406700Z" level=info msg="StopPodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\"" Sep 12 17:32:10.234800 containerd[1469]: time="2025-09-12T17:32:10.234644085Z" level=info msg="Ensure that sandbox 0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084 in task-service has been cleanup successfully" Sep 12 17:32:10.235706 kubelet[2562]: I0912 17:32:10.235680 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:10.237038 containerd[1469]: time="2025-09-12T17:32:10.237007285Z" level=info msg="StopPodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\"" Sep 12 17:32:10.237165 containerd[1469]: time="2025-09-12T17:32:10.237140230Z" level=info msg="Ensure that sandbox 
227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e in task-service has been cleanup successfully" Sep 12 17:32:10.238135 kubelet[2562]: I0912 17:32:10.237781 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:10.238764 containerd[1469]: time="2025-09-12T17:32:10.238328518Z" level=info msg="StopPodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\"" Sep 12 17:32:10.238764 containerd[1469]: time="2025-09-12T17:32:10.238522059Z" level=info msg="Ensure that sandbox aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9 in task-service has been cleanup successfully" Sep 12 17:32:10.242360 kubelet[2562]: I0912 17:32:10.242332 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:10.243082 containerd[1469]: time="2025-09-12T17:32:10.243039157Z" level=info msg="StopPodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\"" Sep 12 17:32:10.243325 containerd[1469]: time="2025-09-12T17:32:10.243286552Z" level=info msg="Ensure that sandbox 19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516 in task-service has been cleanup successfully" Sep 12 17:32:10.285308 containerd[1469]: time="2025-09-12T17:32:10.285190902Z" level=error msg="StopPodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" failed" error="failed to destroy network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.285859 kubelet[2562]: E0912 17:32:10.285666 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:10.285859 kubelet[2562]: E0912 17:32:10.285737 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c"} Sep 12 17:32:10.285859 kubelet[2562]: E0912 17:32:10.285778 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:10.285859 kubelet[2562]: E0912 17:32:10.285809 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" podUID="f0219ed1-d2e0-4c42-9b74-ef9a21b8a523" Sep 12 17:32:10.289158 containerd[1469]: time="2025-09-12T17:32:10.289090186Z" level=error msg="StopPodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" failed" error="failed to destroy network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.290087 containerd[1469]: time="2025-09-12T17:32:10.290059153Z" level=error msg="StopPodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" failed" error="failed to destroy network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.290136 kubelet[2562]: E0912 17:32:10.290039 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:10.290163 kubelet[2562]: E0912 17:32:10.290124 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e"} Sep 12 17:32:10.290194 kubelet[2562]: E0912 17:32:10.290179 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"da63dc76-0ae4-4dcd-9e39-e6b5230d815d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:10.290327 kubelet[2562]: E0912 17:32:10.290257 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"da63dc76-0ae4-4dcd-9e39-e6b5230d815d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-s5glj" podUID="da63dc76-0ae4-4dcd-9e39-e6b5230d815d" Sep 12 17:32:10.291407 kubelet[2562]: E0912 17:32:10.291360 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:10.291545 kubelet[2562]: E0912 17:32:10.291474 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084"} Sep 12 17:32:10.291545 kubelet[2562]: E0912 17:32:10.291502 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fbe993a-426d-4181-874c-464b718119c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:10.291545 kubelet[2562]: E0912 17:32:10.291522 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fbe993a-426d-4181-874c-464b718119c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mp9m5" podUID="2fbe993a-426d-4181-874c-464b718119c8" Sep 12 17:32:10.297339 containerd[1469]: time="2025-09-12T17:32:10.297294721Z" level=error msg="StopPodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" failed" error="failed to destroy network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.297614 kubelet[2562]: E0912 17:32:10.297568 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:10.297649 kubelet[2562]: E0912 17:32:10.297634 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9"} Sep 12 17:32:10.297689 kubelet[2562]: E0912 17:32:10.297671 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"afe9d8f3-858a-48bc-b6b4-9176e5274326\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:10.297770 kubelet[2562]: E0912 17:32:10.297700 2562 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"afe9d8f3-858a-48bc-b6b4-9176e5274326\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-9fd4cb64f-4pbh9" podUID="afe9d8f3-858a-48bc-b6b4-9176e5274326" Sep 12 17:32:10.350759 containerd[1469]: time="2025-09-12T17:32:10.350681942Z" level=error msg="StopPodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" failed" error="failed to destroy network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.351019 kubelet[2562]: E0912 17:32:10.350966 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:10.351084 kubelet[2562]: E0912 17:32:10.351030 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516"} Sep 12 17:32:10.351084 kubelet[2562]: E0912 17:32:10.351074 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbb9727f-81ce-4dc4-900b-5e7086236c76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:10.351255 kubelet[2562]: E0912 17:32:10.351099 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbb9727f-81ce-4dc4-900b-5e7086236c76\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797c87987f-628v2" podUID="dbb9727f-81ce-4dc4-900b-5e7086236c76" Sep 12 17:32:10.351319 containerd[1469]: time="2025-09-12T17:32:10.351274147Z" level=error msg="StopPodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" failed" error="failed to destroy network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:32:10.351430 kubelet[2562]: E0912 17:32:10.351401 2562 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:10.351430 kubelet[2562]: E0912 17:32:10.351429 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2"} Sep 12 17:32:10.351544 kubelet[2562]: E0912 17:32:10.351454 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c62c609a-3cbb-45a5-ba08-4db418faacd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:32:10.351544 kubelet[2562]: E0912 17:32:10.351473 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c62c609a-3cbb-45a5-ba08-4db418faacd8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" podUID="c62c609a-3cbb-45a5-ba08-4db418faacd8" Sep 12 17:32:10.468064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2-shm.mount: Deactivated successfully. Sep 12 17:32:17.141879 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:44900.service - OpenSSH per-connection server daemon (10.0.0.1:44900). Sep 12 17:32:17.306700 sshd[3728]: Accepted publickey for core from 10.0.0.1 port 44900 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:17.308724 sshd[3728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:17.318798 systemd-logind[1448]: New session 10 of user core. Sep 12 17:32:17.322368 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:32:17.623163 sshd[3728]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:17.628258 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:44900.service: Deactivated successfully. Sep 12 17:32:17.630920 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:32:17.631929 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:32:17.633870 systemd-logind[1448]: Removed session 10. Sep 12 17:32:17.686188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197119261.mount: Deactivated successfully. 
Sep 12 17:32:19.887242 containerd[1469]: time="2025-09-12T17:32:19.887103007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:19.931907 containerd[1469]: time="2025-09-12T17:32:19.931817624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:32:19.941159 containerd[1469]: time="2025-09-12T17:32:19.941112883Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:19.961814 containerd[1469]: time="2025-09-12T17:32:19.961723478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:19.962779 containerd[1469]: time="2025-09-12T17:32:19.962709739Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.672154879s" Sep 12 17:32:19.962840 containerd[1469]: time="2025-09-12T17:32:19.962778061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:32:20.085066 containerd[1469]: time="2025-09-12T17:32:20.084995231Z" level=info msg="CreateContainer within sandbox \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:32:20.704748 kubelet[2562]: I0912 17:32:20.704668 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:20.705358 kubelet[2562]: E0912 17:32:20.705149 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:20.726077 containerd[1469]: time="2025-09-12T17:32:20.725994876Z" level=info msg="CreateContainer within sandbox \"2ea2323c33b76534d439e7e880f37fcf7c93d5520aa19f04ce3f73a30b9435bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f3640aeb574b91fde429a3823686ddf913a0b252740934f4fff92f46f5730fc1\"" Sep 12 17:32:20.727391 containerd[1469]: time="2025-09-12T17:32:20.727206227Z" level=info msg="StartContainer for \"f3640aeb574b91fde429a3823686ddf913a0b252740934f4fff92f46f5730fc1\"" Sep 12 17:32:20.782353 systemd[1]: Started cri-containerd-f3640aeb574b91fde429a3823686ddf913a0b252740934f4fff92f46f5730fc1.scope - libcontainer container f3640aeb574b91fde429a3823686ddf913a0b252740934f4fff92f46f5730fc1. Sep 12 17:32:21.001549 containerd[1469]: time="2025-09-12T17:32:21.001432120Z" level=info msg="StartContainer for \"f3640aeb574b91fde429a3823686ddf913a0b252740934f4fff92f46f5730fc1\" returns successfully" Sep 12 17:32:21.015923 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:32:21.016006 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Sep 12 17:32:21.265427 kubelet[2562]: E0912 17:32:21.264381 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:21.443300 kubelet[2562]: I0912 17:32:21.443104 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mt456" podStartSLOduration=2.82733946 podStartE2EDuration="28.443085193s" podCreationTimestamp="2025-09-12 17:31:53 +0000 UTC" firstStartedPulling="2025-09-12 17:31:54.34807655 +0000 UTC m=+20.562364183" lastFinishedPulling="2025-09-12 17:32:19.963822283 +0000 UTC m=+46.178109916" observedRunningTime="2025-09-12 17:32:21.442835577 +0000 UTC m=+47.657123220" watchObservedRunningTime="2025-09-12 17:32:21.443085193 +0000 UTC m=+47.657372826" Sep 12 17:32:21.832414 containerd[1469]: time="2025-09-12T17:32:21.832359874Z" level=info msg="StopPodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\"" Sep 12 17:32:21.893673 containerd[1469]: time="2025-09-12T17:32:21.893612759Z" level=info msg="StopPodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\"" Sep 12 17:32:21.894686 containerd[1469]: time="2025-09-12T17:32:21.894641029Z" level=info msg="StopPodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\"" Sep 12 17:32:21.894852 containerd[1469]: time="2025-09-12T17:32:21.894798861Z" level=info msg="StopPodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\"" Sep 12 17:32:22.641733 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:38412.service - OpenSSH per-connection server daemon (10.0.0.1:38412). Sep 12 17:32:22.684555 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 38412 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:22.686472 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:22.690948 systemd-logind[1448]: New session 11 of user core. Sep 12 17:32:22.699370 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:32:22.880351 sshd[3901]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:22.886040 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:38412.service: Deactivated successfully. Sep 12 17:32:22.890002 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:32:22.890852 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:32:22.892764 systemd-logind[1448]: Removed session 11. Sep 12 17:32:22.894735 containerd[1469]: time="2025-09-12T17:32:22.894352753Z" level=info msg="StopPodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\"" Sep 12 17:32:22.895548 containerd[1469]: time="2025-09-12T17:32:22.895145864Z" level=info msg="StopPodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\"" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.198 [INFO][3811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.198 [INFO][3811] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" iface="eth0" netns="/var/run/netns/cni-0722953f-1bf1-990c-160b-b1b1ce59d877" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.198 [INFO][3811] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" iface="eth0" netns="/var/run/netns/cni-0722953f-1bf1-990c-160b-b1b1ce59d877" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3811] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" iface="eth0" netns="/var/run/netns/cni-0722953f-1bf1-990c-160b-b1b1ce59d877" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.964 [WARNING][3879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.966 [INFO][3879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.991 [INFO][3879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:23.007671 containerd[1469]: 2025-09-12 17:32:22.999 [INFO][3811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:23.010381 containerd[1469]: time="2025-09-12T17:32:23.010322073Z" level=info msg="TearDown network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" successfully" Sep 12 17:32:23.010710 containerd[1469]: time="2025-09-12T17:32:23.010554146Z" level=info msg="StopPodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" returns successfully" Sep 12 17:32:23.012696 systemd[1]: run-netns-cni\x2d0722953f\x2d1bf1\x2d990c\x2d160b\x2db1b1ce59d877.mount: Deactivated successfully. 
Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.194 [INFO][3852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.194 [INFO][3852] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" iface="eth0" netns="/var/run/netns/cni-cfe46d17-ed18-3460-749f-5e4c62f23eda" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.201 [INFO][3852] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" iface="eth0" netns="/var/run/netns/cni-cfe46d17-ed18-3460-749f-5e4c62f23eda" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3852] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" iface="eth0" netns="/var/run/netns/cni-cfe46d17-ed18-3460-749f-5e4c62f23eda" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3876] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:22.991 [INFO][3876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:23.035 [WARNING][3876] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:23.035 [INFO][3876] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:23.063 [INFO][3876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:23.083582 containerd[1469]: 2025-09-12 17:32:23.079 [INFO][3852] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:23.122961 containerd[1469]: time="2025-09-12T17:32:23.085294902Z" level=info msg="TearDown network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" successfully" Sep 12 17:32:23.122961 containerd[1469]: time="2025-09-12T17:32:23.085330029Z" level=info msg="StopPodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" returns successfully" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.205 [INFO][3851] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.206 [INFO][3851] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" iface="eth0" netns="/var/run/netns/cni-1b005454-9a4f-90b0-39c5-503246fd1b6c" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.206 [INFO][3851] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" iface="eth0" netns="/var/run/netns/cni-1b005454-9a4f-90b0-39c5-503246fd1b6c" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.206 [INFO][3851] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" iface="eth0" netns="/var/run/netns/cni-1b005454-9a4f-90b0-39c5-503246fd1b6c" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.206 [INFO][3851] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.206 [INFO][3851] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3882] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:22.948 [INFO][3882] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:23.063 [INFO][3882] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:23.071 [WARNING][3882] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:23.071 [INFO][3882] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:23.073 [INFO][3882] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:23.122961 containerd[1469]: 2025-09-12 17:32:23.080 [INFO][3851] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:23.122961 containerd[1469]: time="2025-09-12T17:32:23.087503500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mp9m5,Uid:2fbe993a-426d-4181-874c-464b718119c8,Namespace:kube-system,Attempt:1,}" Sep 12 17:32:23.122961 containerd[1469]: time="2025-09-12T17:32:23.089138916Z" level=info msg="TearDown network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" successfully" Sep 12 17:32:23.122961 containerd[1469]: time="2025-09-12T17:32:23.089158002Z" level=info msg="StopPodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" returns successfully" Sep 12 17:32:23.122961 containerd[1469]: time="2025-09-12T17:32:23.092205319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-s5glj,Uid:da63dc76-0ae4-4dcd-9e39-e6b5230d815d,Namespace:calico-system,Attempt:1,}" Sep 12 17:32:23.088623 systemd[1]: run-netns-cni\x2dcfe46d17\x2ded18\x2d3460\x2d749f\x2d5e4c62f23eda.mount: Deactivated successfully. 
Sep 12 17:32:23.125142 kubelet[2562]: E0912 17:32:23.085791 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:23.125142 kubelet[2562]: I0912 17:32:23.118624 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj4g5\" (UniqueName: \"kubernetes.io/projected/afe9d8f3-858a-48bc-b6b4-9176e5274326-kube-api-access-nj4g5\") pod \"afe9d8f3-858a-48bc-b6b4-9176e5274326\" (UID: \"afe9d8f3-858a-48bc-b6b4-9176e5274326\") " Sep 12 17:32:23.125142 kubelet[2562]: I0912 17:32:23.118666 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-backend-key-pair\") pod \"afe9d8f3-858a-48bc-b6b4-9176e5274326\" (UID: \"afe9d8f3-858a-48bc-b6b4-9176e5274326\") " Sep 12 17:32:23.125142 kubelet[2562]: I0912 17:32:23.118685 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-ca-bundle\") pod \"afe9d8f3-858a-48bc-b6b4-9176e5274326\" (UID: \"afe9d8f3-858a-48bc-b6b4-9176e5274326\") " Sep 12 17:32:23.125142 kubelet[2562]: I0912 17:32:23.119125 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "afe9d8f3-858a-48bc-b6b4-9176e5274326" (UID: "afe9d8f3-858a-48bc-b6b4-9176e5274326"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:32:23.125142 kubelet[2562]: I0912 17:32:23.124725 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "afe9d8f3-858a-48bc-b6b4-9176e5274326" (UID: "afe9d8f3-858a-48bc-b6b4-9176e5274326"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:32:23.093580 systemd[1]: run-netns-cni\x2d1b005454\x2d9a4f\x2d90b0\x2d39c5\x2d503246fd1b6c.mount: Deactivated successfully. Sep 12 17:32:23.125838 kubelet[2562]: I0912 17:32:23.125401 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe9d8f3-858a-48bc-b6b4-9176e5274326-kube-api-access-nj4g5" (OuterVolumeSpecName: "kube-api-access-nj4g5") pod "afe9d8f3-858a-48bc-b6b4-9176e5274326" (UID: "afe9d8f3-858a-48bc-b6b4-9176e5274326"). InnerVolumeSpecName "kube-api-access-nj4g5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:32:23.126834 systemd[1]: var-lib-kubelet-pods-afe9d8f3\x2d858a\x2d48bc\x2db6b4\x2d9176e5274326-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnj4g5.mount: Deactivated successfully. Sep 12 17:32:23.127040 systemd[1]: var-lib-kubelet-pods-afe9d8f3\x2d858a\x2d48bc\x2db6b4\x2d9176e5274326-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:32:23.219962 kubelet[2562]: I0912 17:32:23.219905 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nj4g5\" (UniqueName: \"kubernetes.io/projected/afe9d8f3-858a-48bc-b6b4-9176e5274326-kube-api-access-nj4g5\") on node \"localhost\" DevicePath \"\"" Sep 12 17:32:23.219962 kubelet[2562]: I0912 17:32:23.219950 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 17:32:23.219962 kubelet[2562]: I0912 17:32:23.219962 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/afe9d8f3-858a-48bc-b6b4-9176e5274326-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 17:32:23.277296 systemd[1]: Removed slice kubepods-besteffort-podafe9d8f3_858a_48bc_b6b4_9176e5274326.slice - libcontainer container kubepods-besteffort-podafe9d8f3_858a_48bc_b6b4_9176e5274326.slice. Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.195 [INFO][3853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.195 [INFO][3853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" iface="eth0" netns="/var/run/netns/cni-21b6c399-1cf0-db23-d890-bdee134898b7" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.201 [INFO][3853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" iface="eth0" netns="/var/run/netns/cni-21b6c399-1cf0-db23-d890-bdee134898b7" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" iface="eth0" netns="/var/run/netns/cni-21b6c399-1cf0-db23-d890-bdee134898b7" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.202 [INFO][3853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.954 [INFO][3880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:22.954 [INFO][3880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:23.073 [INFO][3880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:23.265 [WARNING][3880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:23.265 [INFO][3880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:23.416 [INFO][3880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:23.427914 containerd[1469]: 2025-09-12 17:32:23.423 [INFO][3853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:23.428448 containerd[1469]: time="2025-09-12T17:32:23.428107458Z" level=info msg="TearDown network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" successfully" Sep 12 17:32:23.428448 containerd[1469]: time="2025-09-12T17:32:23.428138236Z" level=info msg="StopPodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" returns successfully" Sep 12 17:32:23.428617 kubelet[2562]: E0912 17:32:23.428593 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:23.429071 containerd[1469]: time="2025-09-12T17:32:23.429023984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bfmb,Uid:6f401e60-a51f-4ed0-8199-7c39a5b7cb6f,Namespace:kube-system,Attempt:1,}" Sep 12 17:32:23.430647 systemd[1]: run-netns-cni\x2d21b6c399\x2d1cf0\x2ddb23\x2dd890\x2dbdee134898b7.mount: Deactivated successfully. Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.035 [INFO][3940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.038 [INFO][3940] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" iface="eth0" netns="/var/run/netns/cni-8a438c76-97a5-8c0b-c21a-eb60e56ec759" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.038 [INFO][3940] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" iface="eth0" netns="/var/run/netns/cni-8a438c76-97a5-8c0b-c21a-eb60e56ec759" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.038 [INFO][3940] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" iface="eth0" netns="/var/run/netns/cni-8a438c76-97a5-8c0b-c21a-eb60e56ec759" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.039 [INFO][3940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.039 [INFO][3940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.066 [INFO][3970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.067 [INFO][3970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.416 [INFO][3970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.426 [WARNING][3970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.426 [INFO][3970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.447 [INFO][3970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:23.453907 containerd[1469]: 2025-09-12 17:32:23.451 [INFO][3940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:23.454430 containerd[1469]: time="2025-09-12T17:32:23.454083744Z" level=info msg="TearDown network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" successfully" Sep 12 17:32:23.454430 containerd[1469]: time="2025-09-12T17:32:23.454114353Z" level=info msg="StopPodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" returns successfully" Sep 12 17:32:23.454944 containerd[1469]: time="2025-09-12T17:32:23.454919988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-th4cn,Uid:c62c609a-3cbb-45a5-ba08-4db418faacd8,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.065 [INFO][3941] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.065 [INFO][3941] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" iface="eth0" netns="/var/run/netns/cni-a8c91704-5b1f-364a-81f5-1830b385a9ff" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.072 [INFO][3941] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" iface="eth0" netns="/var/run/netns/cni-a8c91704-5b1f-364a-81f5-1830b385a9ff" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.072 [INFO][3941] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" iface="eth0" netns="/var/run/netns/cni-a8c91704-5b1f-364a-81f5-1830b385a9ff" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.072 [INFO][3941] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.072 [INFO][3941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.112 [INFO][3983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.112 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.448 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.477 [WARNING][3983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.478 [INFO][3983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.480 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:23.489369 containerd[1469]: 2025-09-12 17:32:23.486 [INFO][3941] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:23.489786 containerd[1469]: time="2025-09-12T17:32:23.489436988Z" level=info msg="TearDown network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" successfully" Sep 12 17:32:23.489786 containerd[1469]: time="2025-09-12T17:32:23.489460994Z" level=info msg="StopPodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" returns successfully" Sep 12 17:32:23.490178 containerd[1469]: time="2025-09-12T17:32:23.490158664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-628v2,Uid:dbb9727f-81ce-4dc4-900b-5e7086236c76,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:32:23.549540 systemd[1]: Created slice kubepods-besteffort-pod49ab2e37_e5c0_49fd_a6a8_983821fc3534.slice - libcontainer container kubepods-besteffort-pod49ab2e37_e5c0_49fd_a6a8_983821fc3534.slice. Sep 12 17:32:23.623620 kubelet[2562]: I0912 17:32:23.623563 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49ab2e37-e5c0-49fd-a6a8-983821fc3534-whisker-ca-bundle\") pod \"whisker-5b89c964f6-g5w2p\" (UID: \"49ab2e37-e5c0-49fd-a6a8-983821fc3534\") " pod="calico-system/whisker-5b89c964f6-g5w2p" Sep 12 17:32:23.623620 kubelet[2562]: I0912 17:32:23.623615 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dbjq\" (UniqueName: \"kubernetes.io/projected/49ab2e37-e5c0-49fd-a6a8-983821fc3534-kube-api-access-9dbjq\") pod \"whisker-5b89c964f6-g5w2p\" (UID: \"49ab2e37-e5c0-49fd-a6a8-983821fc3534\") " pod="calico-system/whisker-5b89c964f6-g5w2p" Sep 12 17:32:23.623620 kubelet[2562]: I0912 17:32:23.623645 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/49ab2e37-e5c0-49fd-a6a8-983821fc3534-whisker-backend-key-pair\") pod \"whisker-5b89c964f6-g5w2p\" (UID: \"49ab2e37-e5c0-49fd-a6a8-983821fc3534\") " pod="calico-system/whisker-5b89c964f6-g5w2p" Sep 12 17:32:23.853303 containerd[1469]: time="2025-09-12T17:32:23.852952303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b89c964f6-g5w2p,Uid:49ab2e37-e5c0-49fd-a6a8-983821fc3534,Namespace:calico-system,Attempt:0,}" Sep 12 17:32:23.893832 containerd[1469]: time="2025-09-12T17:32:23.893768285Z" level=info msg="StopPodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\"" Sep 12 17:32:23.895483 kubelet[2562]: I0912 17:32:23.895443 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe9d8f3-858a-48bc-b6b4-9176e5274326" path="/var/lib/kubelet/pods/afe9d8f3-858a-48bc-b6b4-9176e5274326/volumes" Sep 12 17:32:24.024850 systemd[1]: run-netns-cni\x2d8a438c76\x2d97a5\x2d8c0b\x2dc21a\x2deb60e56ec759.mount: Deactivated successfully. Sep 12 17:32:24.024987 systemd[1]: run-netns-cni\x2da8c91704\x2d5b1f\x2d364a\x2d81f5\x2d1830b385a9ff.mount: Deactivated successfully. Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.956 [INFO][4007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.956 [INFO][4007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" iface="eth0" netns="/var/run/netns/cni-bc1ee7a9-bfc9-31e3-4d73-f0434e2d6a55" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.956 [INFO][4007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" iface="eth0" netns="/var/run/netns/cni-bc1ee7a9-bfc9-31e3-4d73-f0434e2d6a55" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.957 [INFO][4007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" iface="eth0" netns="/var/run/netns/cni-bc1ee7a9-bfc9-31e3-4d73-f0434e2d6a55" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.957 [INFO][4007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.957 [INFO][4007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.983 [INFO][4057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.983 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.983 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.992 [WARNING][4057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:23.992 [INFO][4057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:24.001 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:24.028275 containerd[1469]: 2025-09-12 17:32:24.010 [INFO][4007] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:24.030935 containerd[1469]: time="2025-09-12T17:32:24.030899534Z" level=info msg="TearDown network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" successfully" Sep 12 17:32:24.030935 containerd[1469]: time="2025-09-12T17:32:24.030931134Z" level=info msg="StopPodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" returns successfully" Sep 12 17:32:24.031990 containerd[1469]: time="2025-09-12T17:32:24.031764221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbdtc,Uid:d8b105ba-edcc-41c9-a17f-5d76bf2daf67,Namespace:calico-system,Attempt:1,}" Sep 12 17:32:24.033261 systemd[1]: run-netns-cni\x2dbc1ee7a9\x2dbfc9\x2d31e3\x2d4d73\x2df0434e2d6a55.mount: Deactivated successfully. Sep 12 17:32:24.256497 systemd-networkd[1402]: cali850163cb961: Link UP Sep 12 17:32:24.256846 systemd-networkd[1402]: cali850163cb961: Gained carrier Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:23.956 [INFO][4027] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:23.985 [INFO][4027] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0 coredns-674b8bbfcf- kube-system 2fbe993a-426d-4181-874c-464b718119c8 967 0 2025-09-12 17:31:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-mp9m5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali850163cb961 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:23.985 [INFO][4027] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.044 [INFO][4082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" HandleID="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.044 [INFO][4082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" HandleID="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f6b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-mp9m5", "timestamp":"2025-09-12 17:32:24.044154962 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.044 [INFO][4082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.044 [INFO][4082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.044 [INFO][4082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.093 [INFO][4082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.113 [INFO][4082] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.120 [INFO][4082] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.128 [INFO][4082] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.134 [INFO][4082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.134 [INFO][4082] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.139 [INFO][4082] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.161 [INFO][4082] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.216 [INFO][4082] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.219 [INFO][4082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" host="localhost" Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.220 [INFO][4082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:24.430811 containerd[1469]: 2025-09-12 17:32:24.221 [INFO][4082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" HandleID="k8s-pod-network.937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.431608 containerd[1469]: 2025-09-12 17:32:24.234 [INFO][4027] cni-plugin/k8s.go 418: Populated endpoint ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2fbe993a-426d-4181-874c-464b718119c8", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-mp9m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali850163cb961", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:24.431608 containerd[1469]: 2025-09-12 17:32:24.234 [INFO][4027] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.431608 containerd[1469]: 2025-09-12 17:32:24.234 [INFO][4027] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali850163cb961 ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.431608 containerd[1469]: 2025-09-12 17:32:24.260 [INFO][4027] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.431608 
containerd[1469]: 2025-09-12 17:32:24.260 [INFO][4027] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2fbe993a-426d-4181-874c-464b718119c8", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c", Pod:"coredns-674b8bbfcf-mp9m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali850163cb961", MAC:"fa:02:d7:a9:06:29", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:24.431608 containerd[1469]: 2025-09-12 17:32:24.426 [INFO][4027] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c" Namespace="kube-system" Pod="coredns-674b8bbfcf-mp9m5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:24.526059 systemd-networkd[1402]: cali51ffd328968: Link UP Sep 12 17:32:24.526384 systemd-networkd[1402]: cali51ffd328968: Gained carrier Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:23.956 [INFO][4014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:23.997 [INFO][4014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0 coredns-674b8bbfcf- kube-system 6f401e60-a51f-4ed0-8199-7c39a5b7cb6f 966 0 2025-09-12 17:31:40 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-6bfmb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali51ffd328968 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:23.998 [INFO][4014] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.048 [INFO][4089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" HandleID="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.048 [INFO][4089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" HandleID="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7bd0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-6bfmb", "timestamp":"2025-09-12 17:32:24.04812302 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.048 [INFO][4089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.220 [INFO][4089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.220 [INFO][4089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.233 [INFO][4089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.238 [INFO][4089] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.250 [INFO][4089] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.260 [INFO][4089] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.285 [INFO][4089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.286 [INFO][4089] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.425 [INFO][4089] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130 Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.442 [INFO][4089] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.517 [INFO][4089] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.517 [INFO][4089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" host="localhost" Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.517 [INFO][4089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:24.710761 containerd[1469]: 2025-09-12 17:32:24.517 [INFO][4089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" HandleID="k8s-pod-network.60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.711582 containerd[1469]: 2025-09-12 17:32:24.522 [INFO][4014] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-6bfmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51ffd328968", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:24.711582 containerd[1469]: 2025-09-12 17:32:24.523 [INFO][4014] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.711582 containerd[1469]: 2025-09-12 17:32:24.523 [INFO][4014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51ffd328968 ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.711582 containerd[1469]: 2025-09-12 17:32:24.526 [INFO][4014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.711582 
containerd[1469]: 2025-09-12 17:32:24.526 [INFO][4014] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130", Pod:"coredns-674b8bbfcf-6bfmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51ffd328968", MAC:"86:cf:b4:e8:98:d2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:24.711582 containerd[1469]: 2025-09-12 17:32:24.706 [INFO][4014] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130" Namespace="kube-system" Pod="coredns-674b8bbfcf-6bfmb" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:24.783263 kernel: bpftool[4288]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:32:24.895258 containerd[1469]: time="2025-09-12T17:32:24.893299067Z" level=info msg="StopPodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\"" Sep 12 17:32:24.922256 containerd[1469]: time="2025-09-12T17:32:24.914552770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:24.922256 containerd[1469]: time="2025-09-12T17:32:24.917267291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:24.922256 containerd[1469]: time="2025-09-12T17:32:24.917286407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:24.922256 containerd[1469]: time="2025-09-12T17:32:24.917400535Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:24.950442 systemd-networkd[1402]: calidba7edef898: Link UP Sep 12 17:32:24.967015 systemd-networkd[1402]: calidba7edef898: Gained carrier Sep 12 17:32:24.988529 systemd[1]: Started cri-containerd-937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c.scope - libcontainer container 937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c. Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.006 [INFO][4042] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.082 [INFO][4042] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--s5glj-eth0 goldmane-54d579b49d- calico-system da63dc76-0ae4-4dcd-9e39-e6b5230d815d 965 0 2025-09-12 17:31:53 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-s5glj eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidba7edef898 [] [] }} ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.083 [INFO][4042] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.179 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" HandleID="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.179 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" HandleID="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000119ee0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-s5glj", "timestamp":"2025-09-12 17:32:24.17861622 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.179 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.517 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.518 [INFO][4170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.599 [INFO][4170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.706 [INFO][4170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.716 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.721 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.725 [INFO][4170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.726 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.729 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363 Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.853 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.878 [INFO][4170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.878 [INFO][4170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" host="localhost" Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.878 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:24.998519 containerd[1469]: 2025-09-12 17:32:24.878 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" HandleID="k8s-pod-network.a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:25.000482 containerd[1469]: 2025-09-12 17:32:24.921 [INFO][4042] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--s5glj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"da63dc76-0ae4-4dcd-9e39-e6b5230d815d", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-s5glj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidba7edef898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.000482 containerd[1469]: 2025-09-12 17:32:24.921 [INFO][4042] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:25.000482 containerd[1469]: 2025-09-12 17:32:24.922 [INFO][4042] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidba7edef898 ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:25.000482 containerd[1469]: 2025-09-12 17:32:24.950 [INFO][4042] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:25.000482 containerd[1469]: 2025-09-12 17:32:24.950 [INFO][4042] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--s5glj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"da63dc76-0ae4-4dcd-9e39-e6b5230d815d", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363", Pod:"goldmane-54d579b49d-s5glj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidba7edef898", MAC:"9e:ff:8f:cb:e5:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.000482 containerd[1469]: 2025-09-12 17:32:24.993 [INFO][4042] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363" Namespace="calico-system" Pod="goldmane-54d579b49d-s5glj" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:25.025704 containerd[1469]: time="2025-09-12T17:32:25.025399171Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:25.025704 containerd[1469]: time="2025-09-12T17:32:25.025463784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:25.025704 containerd[1469]: time="2025-09-12T17:32:25.025478282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.025704 containerd[1469]: time="2025-09-12T17:32:25.025560929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.027693 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:25.075203 systemd[1]: Started cri-containerd-60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130.scope - libcontainer container 60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130. 
Sep 12 17:32:25.118656 systemd-networkd[1402]: cali34522919f41: Link UP Sep 12 17:32:25.120352 systemd-networkd[1402]: cali34522919f41: Gained carrier Sep 12 17:32:25.121126 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:25.133243 containerd[1469]: time="2025-09-12T17:32:25.131296179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mp9m5,Uid:2fbe993a-426d-4181-874c-464b718119c8,Namespace:kube-system,Attempt:1,} returns sandbox id \"937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c\"" Sep 12 17:32:25.133243 containerd[1469]: time="2025-09-12T17:32:25.132763614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:25.133243 containerd[1469]: time="2025-09-12T17:32:25.132839939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:25.133243 containerd[1469]: time="2025-09-12T17:32:25.132851310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.133243 containerd[1469]: time="2025-09-12T17:32:25.132939579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.135424 kubelet[2562]: E0912 17:32:25.135029 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:25.170643 containerd[1469]: time="2025-09-12T17:32:25.168935458Z" level=info msg="CreateContainer within sandbox \"937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:32:25.174260 containerd[1469]: time="2025-09-12T17:32:25.171709330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6bfmb,Uid:6f401e60-a51f-4ed0-8199-7c39a5b7cb6f,Namespace:kube-system,Attempt:1,} returns sandbox id \"60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130\"" Sep 12 17:32:25.175802 kubelet[2562]: E0912 17:32:25.175758 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:25.184062 systemd[1]: Started cri-containerd-a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363.scope - libcontainer container a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363. 
Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.008 [INFO][4065] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.079 [INFO][4065] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0 calico-apiserver-797c87987f- calico-apiserver c62c609a-3cbb-45a5-ba08-4db418faacd8 984 0 2025-09-12 17:31:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797c87987f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-797c87987f-th4cn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali34522919f41 [] [] }} ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.080 [INFO][4065] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.215 [INFO][4158] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" HandleID="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.215 [INFO][4158] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" HandleID="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7bd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-797c87987f-th4cn", "timestamp":"2025-09-12 17:32:24.215026924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.215 [INFO][4158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.881 [INFO][4158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.881 [INFO][4158] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.914 [INFO][4158] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.938 [INFO][4158] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.985 [INFO][4158] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.990 [INFO][4158] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.994 [INFO][4158] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.994 [INFO][4158] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:24.999 [INFO][4158] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1 Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:25.038 [INFO][4158] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:25.063 [INFO][4158] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:25.063 [INFO][4158] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" host="localhost" Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:25.063 [INFO][4158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:25.184675 containerd[1469]: 2025-09-12 17:32:25.063 [INFO][4158] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" HandleID="k8s-pod-network.beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.185653 containerd[1469]: 2025-09-12 17:32:25.094 [INFO][4065] cni-plugin/k8s.go 418: Populated endpoint ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62c609a-3cbb-45a5-ba08-4db418faacd8", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-797c87987f-th4cn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34522919f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.185653 containerd[1469]: 2025-09-12 17:32:25.094 [INFO][4065] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.185653 containerd[1469]: 2025-09-12 17:32:25.094 [INFO][4065] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34522919f41 ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.185653 containerd[1469]: 2025-09-12 17:32:25.119 [INFO][4065] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.185653 containerd[1469]: 2025-09-12 17:32:25.121 [INFO][4065] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62c609a-3cbb-45a5-ba08-4db418faacd8", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1", Pod:"calico-apiserver-797c87987f-th4cn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34522919f41", MAC:"22:48:14:0f:15:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.185653 containerd[1469]: 2025-09-12 17:32:25.170 [INFO][4065] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-th4cn" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:25.215069 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:25.241523 containerd[1469]: time="2025-09-12T17:32:25.241469841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-s5glj,Uid:da63dc76-0ae4-4dcd-9e39-e6b5230d815d,Namespace:calico-system,Attempt:1,} returns sandbox id \"a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363\"" Sep 12 17:32:25.243076 containerd[1469]: time="2025-09-12T17:32:25.243031435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:32:25.259409 containerd[1469]: time="2025-09-12T17:32:25.258937822Z" level=info msg="CreateContainer within sandbox \"60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:32:25.285668 systemd-networkd[1402]: vxlan.calico: Link UP Sep 12 17:32:25.285680 systemd-networkd[1402]: vxlan.calico: Gained carrier Sep 12 17:32:25.471548 systemd-networkd[1402]: calid7552d2363b: Link UP Sep 12 17:32:25.473568 systemd-networkd[1402]: calid7552d2363b: Gained carrier Sep 12 17:32:25.477321 containerd[1469]: time="2025-09-12T17:32:25.477195667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:25.477321 containerd[1469]: time="2025-09-12T17:32:25.477298072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:25.477321 containerd[1469]: time="2025-09-12T17:32:25.477312820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.479244 containerd[1469]: time="2025-09-12T17:32:25.478135927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.158 [INFO][4319] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.159 [INFO][4319] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" iface="eth0" netns="/var/run/netns/cni-1d8568bc-367e-9852-6086-384b8f83d8a9" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.159 [INFO][4319] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" iface="eth0" netns="/var/run/netns/cni-1d8568bc-367e-9852-6086-384b8f83d8a9" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.160 [INFO][4319] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" iface="eth0" netns="/var/run/netns/cni-1d8568bc-367e-9852-6086-384b8f83d8a9" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.160 [INFO][4319] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.160 [INFO][4319] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.256 [INFO][4465] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.256 [INFO][4465] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.453 [INFO][4465] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.486 [WARNING][4465] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.486 [INFO][4465] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.492 [INFO][4465] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:25.501516 containerd[1469]: 2025-09-12 17:32:25.496 [INFO][4319] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:25.502283 containerd[1469]: time="2025-09-12T17:32:25.502234583Z" level=info msg="TearDown network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" successfully" Sep 12 17:32:25.502362 containerd[1469]: time="2025-09-12T17:32:25.502348100Z" level=info msg="StopPodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" returns successfully" Sep 12 17:32:25.503157 containerd[1469]: time="2025-09-12T17:32:25.503134196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58b7c7df-c2h2g,Uid:f0219ed1-d2e0-4c42-9b74-ef9a21b8a523,Namespace:calico-system,Attempt:1,}" Sep 12 17:32:25.507781 systemd[1]: Started cri-containerd-beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1.scope - libcontainer container beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1. 
Sep 12 17:32:25.526176 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:24.117 [INFO][4102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:24.143 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--797c87987f--628v2-eth0 calico-apiserver-797c87987f- calico-apiserver dbb9727f-81ce-4dc4-900b-5e7086236c76 985 0 2025-09-12 17:31:50 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:797c87987f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-797c87987f-628v2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid7552d2363b [] [] }} ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:24.143 [INFO][4102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:24.215 [INFO][4218] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" HandleID="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:24.215 [INFO][4218] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" HandleID="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-797c87987f-628v2", "timestamp":"2025-09-12 17:32:24.215297019 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:24.215 [INFO][4218] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.065 [INFO][4218] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.066 [INFO][4218] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.108 [INFO][4218] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.115 [INFO][4218] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.124 [INFO][4218] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.126 [INFO][4218] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.176 [INFO][4218] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.176 [INFO][4218] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.192 [INFO][4218] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27 Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.291 [INFO][4218] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.453 [INFO][4218] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.453 [INFO][4218] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" host="localhost" Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.453 [INFO][4218] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
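The ipam.go sequence above — acquire the host-wide lock, confirm the host's affinity to 192.168.88.128/26, claim the next free ordinal, write the block, release the lock — is the core of Calico's block-affinity assignment. Below is a minimal, self-contained Go model of that flow. The `ipam`/`block` types, the in-process mutex, and the pre-seeded allocations are inventions of this sketch (real Calico serializes through datastore compare-and-swap writes, not a mutex), but the arithmetic reproduces the 192.168.88.133 assignment recorded in the log.

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	cidr     *net.IPNet
	affinity string         // host that owns this block
	used     map[int]string // ordinal within block -> handle ID
}

type ipam struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	blocks []*block
}

func (p *ipam) autoAssign(host, handle string) (net.IP, error) {
	p.mu.Lock()         // "About to acquire host-wide IPAM lock."
	defer p.mu.Unlock() // "Released host-wide IPAM lock."
	for _, b := range p.blocks {
		if b.affinity != host { // "Trying affinity for 192.168.88.128/26"
			continue
		}
		ones, bits := b.cidr.Mask.Size()
		for ord := 0; ord < 1<<(bits-ones); ord++ { // scan the /26's 64 ordinals
			if _, taken := b.used[ord]; taken {
				continue
			}
			b.used[ord] = handle // "Writing block in order to claim IPs"
			ip := make(net.IP, len(b.cidr.IP.To4()))
			copy(ip, b.cidr.IP.To4())
			ip[3] += byte(ord)
			return ip, nil // "Successfully claimed IPs"
		}
	}
	return nil, fmt.Errorf("no affine block with free addresses for %s", host)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.88.128/26")
	// Ordinals 0-4 stand for the earlier assignments in this log
	// (.128 gateway reservation through .132 for the th4cn pod).
	p := &ipam{blocks: []*block{{
		cidr:     cidr,
		affinity: "localhost",
		used:     map[int]string{0: "reserved", 1: "a", 2: "b", 3: "c", 4: "th4cn"},
	}}}
	ip, err := p.autoAssign("localhost", "k8s-pod-network.7aa3a6a1…")
	fmt.Println(ip, err) // 192.168.88.133 <nil>, matching the log above
}
```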
Sep 12 17:32:25.546286 containerd[1469]: 2025-09-12 17:32:25.453 [INFO][4218] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" HandleID="k8s-pod-network.7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.547023 containerd[1469]: 2025-09-12 17:32:25.461 [INFO][4102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--628v2-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbb9727f-81ce-4dc4-900b-5e7086236c76", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-797c87987f-628v2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7552d2363b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.547023 containerd[1469]: 2025-09-12 17:32:25.461 [INFO][4102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.547023 containerd[1469]: 2025-09-12 17:32:25.461 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7552d2363b ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.547023 containerd[1469]: 2025-09-12 17:32:25.476 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.547023 containerd[1469]: 2025-09-12 17:32:25.478 [INFO][4102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--628v2-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbb9727f-81ce-4dc4-900b-5e7086236c76", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27", Pod:"calico-apiserver-797c87987f-628v2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7552d2363b", MAC:"4e:2e:72:ec:71:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.547023 containerd[1469]: 2025-09-12 17:32:25.541 [INFO][4102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27" Namespace="calico-apiserver" Pod="calico-apiserver-797c87987f-628v2" WorkloadEndpoint="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:25.561239 containerd[1469]: time="2025-09-12T17:32:25.561168891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-th4cn,Uid:c62c609a-3cbb-45a5-ba08-4db418faacd8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1\"" Sep 12 17:32:25.745610 containerd[1469]: time="2025-09-12T17:32:25.744902197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:25.745610 containerd[1469]: time="2025-09-12T17:32:25.744985235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:25.745610 containerd[1469]: time="2025-09-12T17:32:25.745001667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.745610 containerd[1469]: time="2025-09-12T17:32:25.745408731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:25.766638 systemd[1]: Started cri-containerd-7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27.scope - libcontainer container 7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27. 
Sep 12 17:32:25.773302 systemd-networkd[1402]: calif8ce2a25a42: Link UP Sep 12 17:32:25.774481 systemd-networkd[1402]: calif8ce2a25a42: Gained carrier Sep 12 17:32:25.795146 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:25.829015 containerd[1469]: time="2025-09-12T17:32:25.828947247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-797c87987f-628v2,Uid:dbb9727f-81ce-4dc4-900b-5e7086236c76,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27\"" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.192 [INFO][4370] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b89c964f6--g5w2p-eth0 whisker-5b89c964f6- calico-system 49ab2e37-e5c0-49fd-a6a8-983821fc3534 1004 0 2025-09-12 17:32:23 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b89c964f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b89c964f6-g5w2p eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif8ce2a25a42 [] [] }} ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.193 [INFO][4370] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.331 [INFO][4510] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" HandleID="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Workload="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.332 [INFO][4510] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" HandleID="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Workload="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a5420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b89c964f6-g5w2p", "timestamp":"2025-09-12 17:32:25.331881504 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.332 [INFO][4510] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.492 [INFO][4510] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.492 [INFO][4510] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.540 [INFO][4510] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.548 [INFO][4510] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.607 [INFO][4510] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.620 [INFO][4510] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.622 [INFO][4510] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.623 [INFO][4510] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.627 [INFO][4510] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.740 [INFO][4510] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.764 [INFO][4510] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.764 [INFO][4510] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" host="localhost" Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.764 [INFO][4510] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:25.927020 containerd[1469]: 2025-09-12 17:32:25.764 [INFO][4510] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" HandleID="k8s-pod-network.a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Workload="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.927985 containerd[1469]: 2025-09-12 17:32:25.768 [INFO][4370] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b89c964f6--g5w2p-eth0", GenerateName:"whisker-5b89c964f6-", Namespace:"calico-system", SelfLink:"", UID:"49ab2e37-e5c0-49fd-a6a8-983821fc3534", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b89c964f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b89c964f6-g5w2p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif8ce2a25a42", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.927985 containerd[1469]: 2025-09-12 17:32:25.769 [INFO][4370] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.927985 containerd[1469]: 2025-09-12 17:32:25.769 [INFO][4370] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8ce2a25a42 ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.927985 containerd[1469]: 2025-09-12 17:32:25.775 [INFO][4370] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.927985 containerd[1469]: 2025-09-12 17:32:25.776 [INFO][4370] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b89c964f6--g5w2p-eth0", GenerateName:"whisker-5b89c964f6-", Namespace:"calico-system", SelfLink:"", UID:"49ab2e37-e5c0-49fd-a6a8-983821fc3534", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 32, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b89c964f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a", Pod:"whisker-5b89c964f6-g5w2p", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif8ce2a25a42", MAC:"56:e4:8f:f4:6e:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:25.927985 containerd[1469]: 2025-09-12 17:32:25.923 [INFO][4370] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a" Namespace="calico-system" Pod="whisker-5b89c964f6-g5w2p" WorkloadEndpoint="localhost-k8s-whisker--5b89c964f6--g5w2p-eth0" Sep 12 17:32:25.973494 containerd[1469]: time="2025-09-12T17:32:25.973413776Z" level=info msg="CreateContainer within sandbox \"937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8a4e5027319c85365dd0c7a4c03f77174696340cb7641d40f727f94eb16a0afc\"" Sep 12 17:32:25.975334 containerd[1469]: time="2025-09-12T17:32:25.975158007Z" level=info msg="StartContainer for \"8a4e5027319c85365dd0c7a4c03f77174696340cb7641d40f727f94eb16a0afc\"" Sep 12 17:32:26.004109 systemd-networkd[1402]: cali8899e86381b: Link UP Sep 12 17:32:26.009588 systemd-networkd[1402]: cali8899e86381b: Gained carrier Sep 12 17:32:26.018793 systemd[1]: run-netns-cni\x2d1d8568bc\x2d367e\x2d9852\x2d6086\x2d384b8f83d8a9.mount: Deactivated successfully. Sep 12 17:32:26.027440 systemd-networkd[1402]: cali850163cb961: Gained IPv6LL Sep 12 17:32:26.037423 systemd[1]: Started cri-containerd-8a4e5027319c85365dd0c7a4c03f77174696340cb7641d40f727f94eb16a0afc.scope - libcontainer container 8a4e5027319c85365dd0c7a4c03f77174696340cb7641d40f727f94eb16a0afc. 
Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.190 [INFO][4341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mbdtc-eth0 csi-node-driver- calico-system d8b105ba-edcc-41c9-a17f-5d76bf2daf67 1007 0 2025-09-12 17:31:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mbdtc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8899e86381b [] [] }} ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.190 [INFO][4341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.332 [INFO][4511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" HandleID="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.332 [INFO][4511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" HandleID="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mbdtc", "timestamp":"2025-09-12 17:32:25.329941109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.332 [INFO][4511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.764 [INFO][4511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.765 [INFO][4511] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.779 [INFO][4511] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.924 [INFO][4511] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.937 [INFO][4511] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.939 [INFO][4511] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.941 [INFO][4511] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.941 [INFO][4511] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.943 [INFO][4511] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2 Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.962 [INFO][4511] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.996 [INFO][4511] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.996 [INFO][4511] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" host="localhost" Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.996 [INFO][4511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:26.175724 containerd[1469]: 2025-09-12 17:32:25.996 [INFO][4511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" HandleID="k8s-pod-network.6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.176617 containerd[1469]: 2025-09-12 17:32:26.000 [INFO][4341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mbdtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8b105ba-edcc-41c9-a17f-5d76bf2daf67", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mbdtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8899e86381b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:26.176617 containerd[1469]: 2025-09-12 17:32:26.000 [INFO][4341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.176617 containerd[1469]: 2025-09-12 17:32:26.000 [INFO][4341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8899e86381b ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.176617 containerd[1469]: 2025-09-12 17:32:26.008 [INFO][4341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.176617 containerd[1469]: 2025-09-12 17:32:26.013 [INFO][4341] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mbdtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8b105ba-edcc-41c9-a17f-5d76bf2daf67", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2", Pod:"csi-node-driver-mbdtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8899e86381b", MAC:"e2:d3:5a:85:2a:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:26.176617 containerd[1469]: 2025-09-12 17:32:26.172 [INFO][4341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2" Namespace="calico-system" Pod="csi-node-driver-mbdtc" WorkloadEndpoint="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:26.219856 systemd-networkd[1402]: calidba7edef898: Gained IPv6LL Sep 12 17:32:26.251434 containerd[1469]: time="2025-09-12T17:32:26.251309356Z" level=info msg="StartContainer for \"8a4e5027319c85365dd0c7a4c03f77174696340cb7641d40f727f94eb16a0afc\" returns successfully" Sep 12 17:32:26.295508 kubelet[2562]: E0912 17:32:26.293914 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:26.355588 kubelet[2562]: I0912 17:32:26.355118 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mp9m5" podStartSLOduration=46.355099314 podStartE2EDuration="46.355099314s" podCreationTimestamp="2025-09-12 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:26.354962143 +0000 UTC m=+52.569249796" watchObservedRunningTime="2025-09-12 17:32:26.355099314 +0000 UTC m=+52.569386948" Sep 12 17:32:26.373240 containerd[1469]: time="2025-09-12T17:32:26.373088496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:26.373240 containerd[1469]: time="2025-09-12T17:32:26.373151556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:26.373240 containerd[1469]: time="2025-09-12T17:32:26.373165172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:26.373461 containerd[1469]: time="2025-09-12T17:32:26.373277726Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:26.406457 systemd[1]: Started cri-containerd-a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a.scope - libcontainer container a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a. Sep 12 17:32:26.411449 systemd-networkd[1402]: cali34522919f41: Gained IPv6LL Sep 12 17:32:26.412183 systemd-networkd[1402]: cali51ffd328968: Gained IPv6LL Sep 12 17:32:26.422605 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:26.453536 containerd[1469]: time="2025-09-12T17:32:26.453479587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b89c964f6-g5w2p,Uid:49ab2e37-e5c0-49fd-a6a8-983821fc3534,Namespace:calico-system,Attempt:0,} returns sandbox id \"a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a\"" Sep 12 17:32:26.464013 containerd[1469]: time="2025-09-12T17:32:26.463324106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:26.464013 containerd[1469]: time="2025-09-12T17:32:26.463968673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:26.464013 containerd[1469]: time="2025-09-12T17:32:26.463983120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:26.464271 containerd[1469]: time="2025-09-12T17:32:26.464090524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:26.488472 systemd[1]: Started cri-containerd-6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2.scope - libcontainer container 6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2. 
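The CreateContainer-within-sandbox / StartContainer pairs running through this stretch of the log (coredns, and shortly the goldmane and csi containers) also go through containerd's CRI plugin. The same create-then-start shape through the public Go client is sketched below; the container and snapshot IDs are placeholders, and this is an outline of the flow rather than the CRI plugin's actual code. The `cri-containerd-<id>.scope` units in the log are the systemd scopes the resulting tasks land in.

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	check(err)
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Image already pulled; "CreateContainer within sandbox …" pairs an OCI
	// spec with a writable snapshot of the image's filesystem.
	image, err := client.GetImage(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.3")
	check(err)
	container, err := client.NewContainer(ctx, "example-container",
		containerd.WithNewSnapshot("example-snapshot", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	check(err)

	// "StartContainer for … returns successfully": a task is the running
	// instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	check(err)
	check(task.Start(ctx))
}
```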
Sep 12 17:32:26.506512 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:26.517088 containerd[1469]: time="2025-09-12T17:32:26.517007990Z" level=info msg="CreateContainer within sandbox \"60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"809714d8dcbbb3ccc71904f8934e09f5cc91022c1cd3b0a134f64719117458c8\"" Sep 12 17:32:26.519256 containerd[1469]: time="2025-09-12T17:32:26.518236609Z" level=info msg="StartContainer for \"809714d8dcbbb3ccc71904f8934e09f5cc91022c1cd3b0a134f64719117458c8\"" Sep 12 17:32:26.529243 containerd[1469]: time="2025-09-12T17:32:26.529177615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mbdtc,Uid:d8b105ba-edcc-41c9-a17f-5d76bf2daf67,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2\"" Sep 12 17:32:26.552389 systemd[1]: Started cri-containerd-809714d8dcbbb3ccc71904f8934e09f5cc91022c1cd3b0a134f64719117458c8.scope - libcontainer container 809714d8dcbbb3ccc71904f8934e09f5cc91022c1cd3b0a134f64719117458c8. Sep 12 17:32:26.657068 containerd[1469]: time="2025-09-12T17:32:26.656988637Z" level=info msg="StartContainer for \"809714d8dcbbb3ccc71904f8934e09f5cc91022c1cd3b0a134f64719117458c8\" returns successfully" Sep 12 17:32:26.694685 systemd-networkd[1402]: califff6ced00ff: Link UP Sep 12 17:32:26.695031 systemd-networkd[1402]: califff6ced00ff: Gained carrier Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.566 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0 calico-kube-controllers-7d58b7c7df- calico-system f0219ed1-d2e0-4c42-9b74-ef9a21b8a523 1023 0 2025-09-12 17:31:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7d58b7c7df projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7d58b7c7df-c2h2g eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califff6ced00ff [] [] }} ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.567 [INFO][4807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.611 [INFO][4854] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" HandleID="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.611 [INFO][4854] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" HandleID="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7d58b7c7df-c2h2g", "timestamp":"2025-09-12 17:32:26.611385924 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.611 [INFO][4854] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.611 [INFO][4854] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.611 [INFO][4854] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.617 [INFO][4854] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.622 [INFO][4854] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.626 [INFO][4854] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.628 [INFO][4854] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.630 [INFO][4854] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.630 [INFO][4854] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.632 [INFO][4854] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914 Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.668 [INFO][4854] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.687 [INFO][4854] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.687 [INFO][4854] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" host="localhost" Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.687 [INFO][4854] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:32:26.741632 containerd[1469]: 2025-09-12 17:32:26.687 [INFO][4854] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" HandleID="k8s-pod-network.35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.742381 containerd[1469]: 2025-09-12 17:32:26.691 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0", GenerateName:"calico-kube-controllers-7d58b7c7df-", Namespace:"calico-system", SelfLink:"", UID:"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d58b7c7df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7d58b7c7df-c2h2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califff6ced00ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:26.742381 containerd[1469]: 2025-09-12 17:32:26.691 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.742381 containerd[1469]: 2025-09-12 17:32:26.691 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califff6ced00ff ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.742381 containerd[1469]: 2025-09-12 17:32:26.695 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.742381 containerd[1469]: 2025-09-12 17:32:26.696 [INFO][4807] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0", GenerateName:"calico-kube-controllers-7d58b7c7df-", Namespace:"calico-system", SelfLink:"", UID:"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d58b7c7df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914", Pod:"calico-kube-controllers-7d58b7c7df-c2h2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califff6ced00ff", MAC:"8a:a0:47:23:18:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:26.742381 containerd[1469]: 2025-09-12 17:32:26.737 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914" Namespace="calico-system" Pod="calico-kube-controllers-7d58b7c7df-c2h2g" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:26.888360 containerd[1469]: time="2025-09-12T17:32:26.888113302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:32:26.888360 containerd[1469]: time="2025-09-12T17:32:26.888180609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:32:26.888360 containerd[1469]: time="2025-09-12T17:32:26.888197441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:26.889132 containerd[1469]: time="2025-09-12T17:32:26.888313893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:32:26.911420 systemd[1]: Started cri-containerd-35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914.scope - libcontainer container 35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914. 
Sep 12 17:32:26.933927 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:26.962752 containerd[1469]: time="2025-09-12T17:32:26.962687299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7d58b7c7df-c2h2g,Uid:f0219ed1-d2e0-4c42-9b74-ef9a21b8a523,Namespace:calico-system,Attempt:1,} returns sandbox id \"35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914\"" Sep 12 17:32:27.050429 systemd-networkd[1402]: cali8899e86381b: Gained IPv6LL Sep 12 17:32:27.114470 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL Sep 12 17:32:27.301947 kubelet[2562]: E0912 17:32:27.301801 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:27.305702 kubelet[2562]: E0912 17:32:27.305674 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:27.425122 kubelet[2562]: I0912 17:32:27.424681 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6bfmb" podStartSLOduration=47.424659415 podStartE2EDuration="47.424659415s" podCreationTimestamp="2025-09-12 17:31:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:27.42414712 +0000 UTC m=+53.638434753" watchObservedRunningTime="2025-09-12 17:32:27.424659415 +0000 UTC m=+53.638947048" Sep 12 17:32:27.434493 systemd-networkd[1402]: calif8ce2a25a42: Gained IPv6LL Sep 12 17:32:27.498546 systemd-networkd[1402]: calid7552d2363b: Gained IPv6LL Sep 12 17:32:27.882513 systemd-networkd[1402]: califff6ced00ff: Gained IPv6LL Sep 12 17:32:27.891801 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:38416.service - OpenSSH per-connection server daemon (10.0.0.1:38416). Sep 12 17:32:28.044544 sshd[4934]: Accepted publickey for core from 10.0.0.1 port 38416 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:28.046888 sshd[4934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:28.051638 systemd-logind[1448]: New session 12 of user core. Sep 12 17:32:28.060382 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:32:28.304721 kubelet[2562]: E0912 17:32:28.304686 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:28.304721 kubelet[2562]: E0912 17:32:28.304716 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:28.326071 sshd[4934]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:28.330380 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:38416.service: Deactivated successfully. Sep 12 17:32:28.332668 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:32:28.333600 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:32:28.334931 systemd-logind[1448]: Removed session 12. 
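The repeated kubelet "Nameserver limits exceeded" events in this log come from kubelet capping a pod's resolv.conf at three nameservers (the glibc resolver limit); the host here has four configured, so one is dropped and the applied line "1.1.1.1 1.0.0.1 8.8.8.8" is reported. A minimal stand-in for that truncation is below, assuming a plain resolv.conf parser; the real logic lives in kubelet's dns.go alongside the log call shown above.

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc resolver limit that kubelet enforces

func truncateNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
		// Mirrors the kubelet event text seen in this log.
		fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
			strings.Join(servers, " "))
	}
	return servers
}

func main() {
	truncateNameservers("nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n")
}
```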
Sep 12 17:32:29.309307 kubelet[2562]: E0912 17:32:29.309205 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:29.312339 kubelet[2562]: E0912 17:32:29.310634 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:31.008759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2616471224.mount: Deactivated successfully. Sep 12 17:32:33.341776 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:60382.service - OpenSSH per-connection server daemon (10.0.0.1:60382). Sep 12 17:32:33.410465 containerd[1469]: time="2025-09-12T17:32:33.410389754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:33.425762 sshd[4973]: Accepted publickey for core from 10.0.0.1 port 60382 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:33.427859 containerd[1469]: time="2025-09-12T17:32:33.427807718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:32:33.450195 sshd[4973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:33.456126 systemd-logind[1448]: New session 13 of user core. Sep 12 17:32:33.464379 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:32:33.504077 containerd[1469]: time="2025-09-12T17:32:33.504017745Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:33.531561 containerd[1469]: time="2025-09-12T17:32:33.531396730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:33.532777 containerd[1469]: time="2025-09-12T17:32:33.532743247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 8.28966801s" Sep 12 17:32:33.532777 containerd[1469]: time="2025-09-12T17:32:33.532777993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:32:33.534589 containerd[1469]: time="2025-09-12T17:32:33.534402228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:32:33.642311 containerd[1469]: time="2025-09-12T17:32:33.642052905Z" level=info msg="CreateContainer within sandbox \"a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:32:33.728341 sshd[4973]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:33.733064 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:60382.service: Deactivated successfully. Sep 12 17:32:33.736205 systemd[1]: session-13.scope: Deactivated successfully. 
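The goldmane:v3.30.3 pull above completed in 8.28966801s after reading 66,357,526 bytes (the reported repo-digest size, 66,357,372, differs only by a little metadata). A trivial sketch deriving the effective pull throughput from those two logged figures; the numbers are copied from the log, the rest is arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	bytesRead := 66357526.0 // from the "stop pulling image" progress record
	elapsed, _ := time.ParseDuration("8.28966801s") // from the "Pulled image" record
	mibPerSec := bytesRead / elapsed.Seconds() / (1 << 20)
	fmt.Printf("~%.1f MiB/s\n", mibPerSec) // roughly 7.6 MiB/s
}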
Sep 12 17:32:33.737291 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:32:33.738513 systemd-logind[1448]: Removed session 13. Sep 12 17:32:33.881934 containerd[1469]: time="2025-09-12T17:32:33.881888209Z" level=info msg="StopPodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\"" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.947 [WARNING][4997] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" WorkloadEndpoint="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.948 [INFO][4997] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.948 [INFO][4997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" iface="eth0" netns="" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.948 [INFO][4997] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.948 [INFO][4997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.972 [INFO][5008] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.972 [INFO][5008] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.972 [INFO][5008] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.978 [WARNING][5008] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.978 [INFO][5008] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.983 [INFO][5008] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:33.997926 containerd[1469]: 2025-09-12 17:32:33.986 [INFO][4997] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:34.077566 containerd[1469]: time="2025-09-12T17:32:33.997976242Z" level=info msg="TearDown network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" successfully" Sep 12 17:32:34.077566 containerd[1469]: time="2025-09-12T17:32:33.998004546Z" level=info msg="StopPodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" returns successfully" Sep 12 17:32:34.081039 containerd[1469]: time="2025-09-12T17:32:34.080969292Z" level=info msg="RemovePodSandbox for \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\"" Sep 12 17:32:34.083227 containerd[1469]: time="2025-09-12T17:32:34.083197975Z" level=info msg="Forcibly stopping sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\"" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.120 [WARNING][5027] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" WorkloadEndpoint="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.120 [INFO][5027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.120 [INFO][5027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" iface="eth0" netns="" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.120 [INFO][5027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.120 [INFO][5027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.145 [INFO][5036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.145 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.145 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.153 [WARNING][5036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.153 [INFO][5036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" HandleID="k8s-pod-network.aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Workload="localhost-k8s-whisker--9fd4cb64f--4pbh9-eth0" Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.155 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:34.161532 containerd[1469]: 2025-09-12 17:32:34.158 [INFO][5027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9" Sep 12 17:32:34.162005 containerd[1469]: time="2025-09-12T17:32:34.161581460Z" level=info msg="TearDown network for sandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" successfully" Sep 12 17:32:34.646632 containerd[1469]: time="2025-09-12T17:32:34.646555628Z" level=info msg="CreateContainer within sandbox \"a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fdfb99f2b802cbf4c733286e7e2c028cedce8491d2134c894c9067109d22d9d9\"" Sep 12 17:32:34.647491 containerd[1469]: time="2025-09-12T17:32:34.647443795Z" level=info msg="StartContainer for \"fdfb99f2b802cbf4c733286e7e2c028cedce8491d2134c894c9067109d22d9d9\"" Sep 12 17:32:34.689368 systemd[1]: Started cri-containerd-fdfb99f2b802cbf4c733286e7e2c028cedce8491d2134c894c9067109d22d9d9.scope - libcontainer container fdfb99f2b802cbf4c733286e7e2c028cedce8491d2134c894c9067109d22d9d9. Sep 12 17:32:34.995464 containerd[1469]: time="2025-09-12T17:32:34.995410414Z" level=info msg="StartContainer for \"fdfb99f2b802cbf4c733286e7e2c028cedce8491d2134c894c9067109d22d9d9\" returns successfully" Sep 12 17:32:35.122956 containerd[1469]: time="2025-09-12T17:32:35.122849523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:35.122956 containerd[1469]: time="2025-09-12T17:32:35.122933613Z" level=info msg="RemovePodSandbox \"aa2d9ba14e5f5dce2423d66feab3910296ce3ef53bceaf2bf1c6632bd75a49e9\" returns successfully" Sep 12 17:32:35.123706 containerd[1469]: time="2025-09-12T17:32:35.123517592Z" level=info msg="StopPodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\"" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.273 [WARNING][5093] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--s5glj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"da63dc76-0ae4-4dcd-9e39-e6b5230d815d", ResourceVersion:"1019", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363", Pod:"goldmane-54d579b49d-s5glj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidba7edef898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.273 [INFO][5093] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.273 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" iface="eth0" netns="" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.273 [INFO][5093] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.273 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.297 [INFO][5102] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.297 [INFO][5102] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.297 [INFO][5102] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.315 [WARNING][5102] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.315 [INFO][5102] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.317 [INFO][5102] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:35.323795 containerd[1469]: 2025-09-12 17:32:35.320 [INFO][5093] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.323795 containerd[1469]: time="2025-09-12T17:32:35.323718835Z" level=info msg="TearDown network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" successfully" Sep 12 17:32:35.323795 containerd[1469]: time="2025-09-12T17:32:35.323741728Z" level=info msg="StopPodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" returns successfully" Sep 12 17:32:35.324452 containerd[1469]: time="2025-09-12T17:32:35.324339523Z" level=info msg="RemovePodSandbox for \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\"" Sep 12 17:32:35.324452 containerd[1469]: time="2025-09-12T17:32:35.324362867Z" level=info msg="Forcibly stopping sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\"" Sep 12 17:32:35.434332 kubelet[2562]: I0912 17:32:35.434230 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-s5glj" podStartSLOduration=34.143149211 podStartE2EDuration="42.434196712s" podCreationTimestamp="2025-09-12 17:31:53 +0000 UTC" firstStartedPulling="2025-09-12 17:32:25.242791117 +0000 UTC m=+51.457078750" lastFinishedPulling="2025-09-12 17:32:33.533838618 +0000 UTC m=+59.748126251" observedRunningTime="2025-09-12 17:32:35.433534315 +0000 UTC m=+61.647821958" watchObservedRunningTime="2025-09-12 17:32:35.434196712 +0000 UTC m=+61.648484346" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.440 [WARNING][5119] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--s5glj-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"da63dc76-0ae4-4dcd-9e39-e6b5230d815d", ResourceVersion:"1119", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a1143631dbebe1f24a7bfb1574c024ba7921f1d137c08e48f3380f16ffeb3363", Pod:"goldmane-54d579b49d-s5glj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidba7edef898", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.441 [INFO][5119] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.441 [INFO][5119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" iface="eth0" netns="" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.441 [INFO][5119] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.441 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.520 [INFO][5146] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.521 [INFO][5146] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.521 [INFO][5146] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.716 [WARNING][5146] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.716 [INFO][5146] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" HandleID="k8s-pod-network.227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Workload="localhost-k8s-goldmane--54d579b49d--s5glj-eth0" Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.739 [INFO][5146] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:35.746001 containerd[1469]: 2025-09-12 17:32:35.742 [INFO][5119] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e" Sep 12 17:32:35.746893 containerd[1469]: time="2025-09-12T17:32:35.746031738Z" level=info msg="TearDown network for sandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" successfully" Sep 12 17:32:36.125282 containerd[1469]: time="2025-09-12T17:32:36.125052552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:36.125282 containerd[1469]: time="2025-09-12T17:32:36.125146981Z" level=info msg="RemovePodSandbox \"227db2e32a446dc445f0851119970b9f3d91fd21043ffa996602a7923e3a272e\" returns successfully" Sep 12 17:32:36.125987 containerd[1469]: time="2025-09-12T17:32:36.125756228Z" level=info msg="StopPodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\"" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.186 [WARNING][5177] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--628v2-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbb9727f-81ce-4dc4-900b-5e7086236c76", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27", Pod:"calico-apiserver-797c87987f-628v2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7552d2363b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.187 [INFO][5177] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.187 [INFO][5177] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" iface="eth0" netns="" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.187 [INFO][5177] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.187 [INFO][5177] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.284 [INFO][5185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.284 [INFO][5185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.284 [INFO][5185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.291 [WARNING][5185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.291 [INFO][5185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.292 [INFO][5185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:36.298030 containerd[1469]: 2025-09-12 17:32:36.295 [INFO][5177] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.298703 containerd[1469]: time="2025-09-12T17:32:36.298644998Z" level=info msg="TearDown network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" successfully" Sep 12 17:32:36.298703 containerd[1469]: time="2025-09-12T17:32:36.298679804Z" level=info msg="StopPodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" returns successfully" Sep 12 17:32:36.299291 containerd[1469]: time="2025-09-12T17:32:36.299263291Z" level=info msg="RemovePodSandbox for \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\"" Sep 12 17:32:36.299380 containerd[1469]: time="2025-09-12T17:32:36.299295974Z" level=info msg="Forcibly stopping sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\"" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.346 [WARNING][5203] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--628v2-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbb9727f-81ce-4dc4-900b-5e7086236c76", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27", Pod:"calico-apiserver-797c87987f-628v2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid7552d2363b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.346 [INFO][5203] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.346 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" iface="eth0" netns="" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.347 [INFO][5203] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.347 [INFO][5203] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.372 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.372 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.372 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.381 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.381 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" HandleID="k8s-pod-network.19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Workload="localhost-k8s-calico--apiserver--797c87987f--628v2-eth0" Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.382 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:36.389053 containerd[1469]: 2025-09-12 17:32:36.385 [INFO][5203] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516" Sep 12 17:32:36.389053 containerd[1469]: time="2025-09-12T17:32:36.389007604Z" level=info msg="TearDown network for sandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" successfully" Sep 12 17:32:36.466095 containerd[1469]: time="2025-09-12T17:32:36.466031916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:36.466340 containerd[1469]: time="2025-09-12T17:32:36.466115615Z" level=info msg="RemovePodSandbox \"19b621b12127fe08741ada7333c9945d3ae29c9ca03670231410a13ab0fca516\" returns successfully" Sep 12 17:32:36.466806 containerd[1469]: time="2025-09-12T17:32:36.466767232Z" level=info msg="StopPodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\"" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.505 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0", GenerateName:"calico-kube-controllers-7d58b7c7df-", Namespace:"calico-system", SelfLink:"", UID:"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d58b7c7df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914", Pod:"calico-kube-controllers-7d58b7c7df-c2h2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califff6ced00ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.505 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.505 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" iface="eth0" netns="" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.505 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.505 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.525 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.525 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.525 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.532 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.532 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.534 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:36.540241 containerd[1469]: 2025-09-12 17:32:36.537 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.540668 containerd[1469]: time="2025-09-12T17:32:36.540276089Z" level=info msg="TearDown network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" successfully" Sep 12 17:32:36.540668 containerd[1469]: time="2025-09-12T17:32:36.540303732Z" level=info msg="StopPodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" returns successfully" Sep 12 17:32:36.540860 containerd[1469]: time="2025-09-12T17:32:36.540830322Z" level=info msg="RemovePodSandbox for \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\"" Sep 12 17:32:36.540917 containerd[1469]: time="2025-09-12T17:32:36.540865759Z" level=info msg="Forcibly stopping sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\"" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.578 [WARNING][5276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0", GenerateName:"calico-kube-controllers-7d58b7c7df-", Namespace:"calico-system", SelfLink:"", UID:"f0219ed1-d2e0-4c42-9b74-ef9a21b8a523", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7d58b7c7df", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914", Pod:"calico-kube-controllers-7d58b7c7df-c2h2g", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califff6ced00ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.578 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.578 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" iface="eth0" netns="" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.578 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.578 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.615 [INFO][5285] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.616 [INFO][5285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.616 [INFO][5285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.622 [WARNING][5285] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.622 [INFO][5285] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" HandleID="k8s-pod-network.d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Workload="localhost-k8s-calico--kube--controllers--7d58b7c7df--c2h2g-eth0" Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.623 [INFO][5285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:36.629845 containerd[1469]: 2025-09-12 17:32:36.626 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c" Sep 12 17:32:36.630296 containerd[1469]: time="2025-09-12T17:32:36.629890717Z" level=info msg="TearDown network for sandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" successfully" Sep 12 17:32:36.698497 containerd[1469]: time="2025-09-12T17:32:36.698394377Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:36.698497 containerd[1469]: time="2025-09-12T17:32:36.698495759Z" level=info msg="RemovePodSandbox \"d8593cc93661b57aa9d4a20e6181f0e2480c0af49e4f2389db490a8d728f786c\" returns successfully" Sep 12 17:32:36.699139 containerd[1469]: time="2025-09-12T17:32:36.699077112Z" level=info msg="StopPodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\"" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.738 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62c609a-3cbb-45a5-ba08-4db418faacd8", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1", Pod:"calico-apiserver-797c87987f-th4cn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34522919f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.738 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.738 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" iface="eth0" netns="" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.738 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.738 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.760 [INFO][5311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.761 [INFO][5311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.761 [INFO][5311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.767 [WARNING][5311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.767 [INFO][5311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.769 [INFO][5311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:36.776024 containerd[1469]: 2025-09-12 17:32:36.772 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.777104 containerd[1469]: time="2025-09-12T17:32:36.776072118Z" level=info msg="TearDown network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" successfully" Sep 12 17:32:36.777104 containerd[1469]: time="2025-09-12T17:32:36.776107625Z" level=info msg="StopPodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" returns successfully" Sep 12 17:32:36.777104 containerd[1469]: time="2025-09-12T17:32:36.776677646Z" level=info msg="RemovePodSandbox for \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\"" Sep 12 17:32:36.777104 containerd[1469]: time="2025-09-12T17:32:36.776716380Z" level=info msg="Forcibly stopping sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\"" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.813 [WARNING][5330] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0", GenerateName:"calico-apiserver-797c87987f-", Namespace:"calico-apiserver", SelfLink:"", UID:"c62c609a-3cbb-45a5-ba08-4db418faacd8", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"797c87987f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1", Pod:"calico-apiserver-797c87987f-th4cn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali34522919f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.813 [INFO][5330] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.813 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" iface="eth0" netns="" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.813 [INFO][5330] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.813 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.834 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.835 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.835 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.840 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.840 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" HandleID="k8s-pod-network.f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Workload="localhost-k8s-calico--apiserver--797c87987f--th4cn-eth0" Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.846 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:36.853314 containerd[1469]: 2025-09-12 17:32:36.849 [INFO][5330] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2" Sep 12 17:32:36.853789 containerd[1469]: time="2025-09-12T17:32:36.853347435Z" level=info msg="TearDown network for sandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" successfully" Sep 12 17:32:36.958339 containerd[1469]: time="2025-09-12T17:32:36.957838697Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:36.958339 containerd[1469]: time="2025-09-12T17:32:36.957917176Z" level=info msg="RemovePodSandbox \"f088e0ad8eb4e7a873647637fdb907823dafa1332ea83d823bc184b20a73a6d2\" returns successfully" Sep 12 17:32:36.959098 containerd[1469]: time="2025-09-12T17:32:36.959061788Z" level=info msg="StopPodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\"" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:36.997 [WARNING][5359] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130", Pod:"coredns-674b8bbfcf-6bfmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51ffd328968", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:36.998 [INFO][5359] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:36.998 [INFO][5359] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" iface="eth0" netns="" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:36.998 [INFO][5359] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:36.998 [INFO][5359] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.020 [INFO][5368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.020 [INFO][5368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.020 [INFO][5368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.027 [WARNING][5368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.027 [INFO][5368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.028 [INFO][5368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:37.034804 containerd[1469]: 2025-09-12 17:32:37.031 [INFO][5359] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.034804 containerd[1469]: time="2025-09-12T17:32:37.034723277Z" level=info msg="TearDown network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" successfully" Sep 12 17:32:37.034804 containerd[1469]: time="2025-09-12T17:32:37.034751431Z" level=info msg="StopPodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" returns successfully" Sep 12 17:32:37.035368 containerd[1469]: time="2025-09-12T17:32:37.035324789Z" level=info msg="RemovePodSandbox for \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\"" Sep 12 17:32:37.035368 containerd[1469]: time="2025-09-12T17:32:37.035356199Z" level=info msg="Forcibly stopping sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\"" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.111 [WARNING][5386] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6f401e60-a51f-4ed0-8199-7c39a5b7cb6f", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"60041c9f0903de329f570bd8642d7aab4ab5ff46446a35e4f9be02be1e5de130", Pod:"coredns-674b8bbfcf-6bfmb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali51ffd328968", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.111 [INFO][5386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.111 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" iface="eth0" netns="" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.111 [INFO][5386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.111 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.132 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.132 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.132 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.140 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.140 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" HandleID="k8s-pod-network.2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Workload="localhost-k8s-coredns--674b8bbfcf--6bfmb-eth0" Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.142 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:37.153139 containerd[1469]: 2025-09-12 17:32:37.146 [INFO][5386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197" Sep 12 17:32:37.153139 containerd[1469]: time="2025-09-12T17:32:37.150866934Z" level=info msg="TearDown network for sandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" successfully" Sep 12 17:32:37.157921 containerd[1469]: time="2025-09-12T17:32:37.157872347Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:37.157921 containerd[1469]: time="2025-09-12T17:32:37.157933322Z" level=info msg="RemovePodSandbox \"2df1eacb8ad2d15e448a45e3fdd6c02f2ce12dd1a6d7b2f94af7d1a51b9d5197\" returns successfully" Sep 12 17:32:37.158609 containerd[1469]: time="2025-09-12T17:32:37.158550514Z" level=info msg="StopPodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\"" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.199 [WARNING][5411] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mbdtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8b105ba-edcc-41c9-a17f-5d76bf2daf67", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2", Pod:"csi-node-driver-mbdtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8899e86381b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.199 [INFO][5411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.199 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" iface="eth0" netns="" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.199 [INFO][5411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.199 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.223 [INFO][5420] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.224 [INFO][5420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.224 [INFO][5420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.231 [WARNING][5420] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.231 [INFO][5420] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.233 [INFO][5420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:37.240131 containerd[1469]: 2025-09-12 17:32:37.236 [INFO][5411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.240131 containerd[1469]: time="2025-09-12T17:32:37.239800477Z" level=info msg="TearDown network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" successfully" Sep 12 17:32:37.240131 containerd[1469]: time="2025-09-12T17:32:37.239834933Z" level=info msg="StopPodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" returns successfully" Sep 12 17:32:37.242364 containerd[1469]: time="2025-09-12T17:32:37.242320900Z" level=info msg="RemovePodSandbox for \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\"" Sep 12 17:32:37.242364 containerd[1469]: time="2025-09-12T17:32:37.242358992Z" level=info msg="Forcibly stopping sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\"" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.283 [WARNING][5439] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mbdtc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d8b105ba-edcc-41c9-a17f-5d76bf2daf67", ResourceVersion:"1046", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2", Pod:"csi-node-driver-mbdtc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8899e86381b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.284 [INFO][5439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.284 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" iface="eth0" netns="" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.284 [INFO][5439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.284 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.307 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.307 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.308 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.314 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.314 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" HandleID="k8s-pod-network.77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Workload="localhost-k8s-csi--node--driver--mbdtc-eth0" Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.315 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:37.322726 containerd[1469]: 2025-09-12 17:32:37.318 [INFO][5439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523" Sep 12 17:32:37.323281 containerd[1469]: time="2025-09-12T17:32:37.322776616Z" level=info msg="TearDown network for sandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" successfully" Sep 12 17:32:38.704595 containerd[1469]: time="2025-09-12T17:32:38.704510611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:38.705246 containerd[1469]: time="2025-09-12T17:32:38.704620110Z" level=info msg="RemovePodSandbox \"77f8ea2bf8c4cf29b4b3692445c8873b327833f3b5be2ceca063195bef8ed523\" returns successfully" Sep 12 17:32:38.705246 containerd[1469]: time="2025-09-12T17:32:38.705172398Z" level=info msg="StopPodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\"" Sep 12 17:32:38.749522 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:60394.service - OpenSSH per-connection server daemon (10.0.0.1:60394). Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.744 [WARNING][5465] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2fbe993a-426d-4181-874c-464b718119c8", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c", Pod:"coredns-674b8bbfcf-mp9m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali850163cb961", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.744 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.744 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" iface="eth0" netns="" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.744 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.744 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.772 [INFO][5477] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.773 [INFO][5477] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.773 [INFO][5477] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.780 [WARNING][5477] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.780 [INFO][5477] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.782 [INFO][5477] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:38.789382 containerd[1469]: 2025-09-12 17:32:38.786 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.789984 containerd[1469]: time="2025-09-12T17:32:38.789420412Z" level=info msg="TearDown network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" successfully" Sep 12 17:32:38.789984 containerd[1469]: time="2025-09-12T17:32:38.789446793Z" level=info msg="StopPodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" returns successfully" Sep 12 17:32:38.790173 containerd[1469]: time="2025-09-12T17:32:38.790134608Z" level=info msg="RemovePodSandbox for \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\"" Sep 12 17:32:38.790288 containerd[1469]: time="2025-09-12T17:32:38.790173371Z" level=info msg="Forcibly stopping sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\"" Sep 12 17:32:38.853943 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 60394 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:38.856732 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:38.863365 systemd-logind[1448]: New session 14 of user core. Sep 12 17:32:38.874984 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.837 [WARNING][5498] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"2fbe993a-426d-4181-874c-464b718119c8", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 31, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"937a200bff0aaa2a5614e690d72bea8e3ec264847997c04edcb120561088414c", Pod:"coredns-674b8bbfcf-mp9m5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali850163cb961", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.837 [INFO][5498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.837 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" iface="eth0" netns="" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.837 [INFO][5498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.837 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.869 [INFO][5508] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.869 [INFO][5508] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.869 [INFO][5508] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.877 [WARNING][5508] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.878 [INFO][5508] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" HandleID="k8s-pod-network.0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Workload="localhost-k8s-coredns--674b8bbfcf--mp9m5-eth0" Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.880 [INFO][5508] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:32:38.888425 containerd[1469]: 2025-09-12 17:32:38.884 [INFO][5498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084" Sep 12 17:32:38.888883 containerd[1469]: time="2025-09-12T17:32:38.888464868Z" level=info msg="TearDown network for sandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" successfully" Sep 12 17:32:39.017871 containerd[1469]: time="2025-09-12T17:32:39.017651559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:32:39.017871 containerd[1469]: time="2025-09-12T17:32:39.017742872Z" level=info msg="RemovePodSandbox \"0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084\" returns successfully" Sep 12 17:32:39.033500 sshd[5475]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:39.040727 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:60394.service: Deactivated successfully. Sep 12 17:32:39.043653 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:32:39.044946 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:32:39.046767 systemd-logind[1448]: Removed session 14. 
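The recurring pairing above of "Failed to get podSandbox status ... not found" followed by "RemovePodSandbox ... returns successfully" is the standard idempotent-delete pattern: a sandbox that is already gone is treated as removed, and only the status for the event is missing. A sketch of that handling using containerd's errdefs helpers; removeSandbox is a hypothetical stand-in for the actual store call:

```go
// Sketch of the idempotent-remove pattern behind the "not found ...
// returns successfully" lines above. removeSandbox is a stand-in that
// reports the sandbox as already gone.
package main

import (
	"fmt"

	"github.com/containerd/containerd/errdefs"
)

func removeSandbox(id string) error {
	return fmt.Errorf("sandbox %s: %w", id, errdefs.ErrNotFound)
}

// removePodSandbox swallows NotFound so repeated removals still succeed,
// mirroring the RemovePodSandbox behaviour in the log.
func removePodSandbox(id string) error {
	if err := removeSandbox(id); err != nil {
		if errdefs.IsNotFound(err) {
			fmt.Printf("RemovePodSandbox %q returns successfully (already gone)\n", id)
			return nil
		}
		return err
	}
	return nil
}

func main() {
	_ = removePodSandbox("0d7d0510c2160cc145cd6ca5bc84418fa5b36c5e36ad250e8bc05bc1086d2084")
}
```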
Sep 12 17:32:40.114571 containerd[1469]: time="2025-09-12T17:32:40.114505966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:40.145582 containerd[1469]: time="2025-09-12T17:32:40.145535084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:32:40.211373 containerd[1469]: time="2025-09-12T17:32:40.211306834Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:40.260601 containerd[1469]: time="2025-09-12T17:32:40.260521667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:40.261508 containerd[1469]: time="2025-09-12T17:32:40.261476468Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.727043271s" Sep 12 17:32:40.261572 containerd[1469]: time="2025-09-12T17:32:40.261509891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:32:40.262635 containerd[1469]: time="2025-09-12T17:32:40.262606781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:32:40.326138 containerd[1469]: time="2025-09-12T17:32:40.326086834Z" level=info msg="CreateContainer within sandbox \"beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:32:40.694931 containerd[1469]: time="2025-09-12T17:32:40.694869054Z" level=info msg="CreateContainer within sandbox \"beda38e2ed2a6dba9372ce698f3ae44051219459883b374bcd04aaa17079a2c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5bf391a1a4ce1ed5318c32910f23b4f0072b10c7642179bfc8bb9f0b40992d06\"" Sep 12 17:32:40.695681 containerd[1469]: time="2025-09-12T17:32:40.695496383Z" level=info msg="StartContainer for \"5bf391a1a4ce1ed5318c32910f23b4f0072b10c7642179bfc8bb9f0b40992d06\"" Sep 12 17:32:40.755368 systemd[1]: Started cri-containerd-5bf391a1a4ce1ed5318c32910f23b4f0072b10c7642179bfc8bb9f0b40992d06.scope - libcontainer container 5bf391a1a4ce1ed5318c32910f23b4f0072b10c7642179bfc8bb9f0b40992d06. 
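The sequence above — ImageCreate events, "Pulled image ... in 6.727043271s", CreateContainer, StartContainer, then a cri-containerd-<id>.scope unit — is the pull-then-run flow. A simplified sketch of the same steps using the containerd Go client directly; the real path goes through the CRI plugin, runs the container inside an existing pod sandbox, and wires up CNI, whereas this just pulls the same image and starts a standalone task in the k8s.io namespace:

```go
// Sketch of the Pull -> CreateContainer -> StartContainer sequence above,
// simplified to a standalone containerd client instead of the CRI plugin.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull by tag; containerd records the repo digest seen in the log
	// (sha256:6a24147f...) alongside the tag.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "calico-apiserver-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-apiserver-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask + Start correspond to the StartContainer step; under CRI the
	// running task is what systemd shows as a cri-containerd-<id>.scope.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("StartContainer returned successfully")
}
```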
Sep 12 17:32:41.205082 containerd[1469]: time="2025-09-12T17:32:41.204855899Z" level=info msg="StartContainer for \"5bf391a1a4ce1ed5318c32910f23b4f0072b10c7642179bfc8bb9f0b40992d06\" returns successfully" Sep 12 17:32:41.331794 containerd[1469]: time="2025-09-12T17:32:41.331722637Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:41.341168 containerd[1469]: time="2025-09-12T17:32:41.341057394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:32:41.343341 containerd[1469]: time="2025-09-12T17:32:41.343312900Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 1.080672706s" Sep 12 17:32:41.343398 containerd[1469]: time="2025-09-12T17:32:41.343345732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:32:41.344824 containerd[1469]: time="2025-09-12T17:32:41.344576025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:32:41.354403 containerd[1469]: time="2025-09-12T17:32:41.354334836Z" level=info msg="CreateContainer within sandbox \"7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:32:41.376951 kubelet[2562]: I0912 17:32:41.376518 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797c87987f-th4cn" podStartSLOduration=36.676661141 podStartE2EDuration="51.376499146s" podCreationTimestamp="2025-09-12 17:31:50 +0000 UTC" firstStartedPulling="2025-09-12 17:32:25.562621296 +0000 UTC m=+51.776908929" lastFinishedPulling="2025-09-12 17:32:40.262459301 +0000 UTC m=+66.476746934" observedRunningTime="2025-09-12 17:32:41.376067938 +0000 UTC m=+67.590355571" watchObservedRunningTime="2025-09-12 17:32:41.376499146 +0000 UTC m=+67.590786779" Sep 12 17:32:41.427460 containerd[1469]: time="2025-09-12T17:32:41.427393042Z" level=info msg="CreateContainer within sandbox \"7aa3a6a1582b236e4fe4047bd83c2e0c93535fddfeffa43c9a72ceb39ec96e27\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6653213b3b8f6e8cfceeb5671c5fcfe6d46751b865b7c751dbc77939e33562d1\"" Sep 12 17:32:41.428492 containerd[1469]: time="2025-09-12T17:32:41.428445588Z" level=info msg="StartContainer for \"6653213b3b8f6e8cfceeb5671c5fcfe6d46751b865b7c751dbc77939e33562d1\"" Sep 12 17:32:41.463355 systemd[1]: Started cri-containerd-6653213b3b8f6e8cfceeb5671c5fcfe6d46751b865b7c751dbc77939e33562d1.scope - libcontainer container 6653213b3b8f6e8cfceeb5671c5fcfe6d46751b865b7c751dbc77939e33562d1. 
Sep 12 17:32:41.537498 containerd[1469]: time="2025-09-12T17:32:41.537440494Z" level=info msg="StartContainer for \"6653213b3b8f6e8cfceeb5671c5fcfe6d46751b865b7c751dbc77939e33562d1\" returns successfully" Sep 12 17:32:42.421179 kubelet[2562]: I0912 17:32:42.420959 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-797c87987f-628v2" podStartSLOduration=36.90720777 podStartE2EDuration="52.420907969s" podCreationTimestamp="2025-09-12 17:31:50 +0000 UTC" firstStartedPulling="2025-09-12 17:32:25.830449548 +0000 UTC m=+52.044737181" lastFinishedPulling="2025-09-12 17:32:41.344149747 +0000 UTC m=+67.558437380" observedRunningTime="2025-09-12 17:32:42.420154401 +0000 UTC m=+68.634442034" watchObservedRunningTime="2025-09-12 17:32:42.420907969 +0000 UTC m=+68.635195602" Sep 12 17:32:43.352242 kubelet[2562]: I0912 17:32:43.352169 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:43.352242 kubelet[2562]: I0912 17:32:43.352200 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:44.057670 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:43036.service - OpenSSH per-connection server daemon (10.0.0.1:43036). Sep 12 17:32:44.104098 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 43036 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:44.107101 sshd[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:44.112265 systemd-logind[1448]: New session 15 of user core. Sep 12 17:32:44.118386 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:32:44.131019 containerd[1469]: time="2025-09-12T17:32:44.130900721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:44.156226 containerd[1469]: time="2025-09-12T17:32:44.156143589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:32:44.498714 kubelet[2562]: I0912 17:32:44.498650 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:44.890272 sshd[5624]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:44.907891 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:43036.service: Deactivated successfully. Sep 12 17:32:44.910398 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:32:44.912707 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:32:44.914356 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:43044.service - OpenSSH per-connection server daemon (10.0.0.1:43044). Sep 12 17:32:44.915506 systemd-logind[1448]: Removed session 15. Sep 12 17:32:44.942428 containerd[1469]: time="2025-09-12T17:32:44.942307064Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:44.956041 sshd[5662]: Accepted publickey for core from 10.0.0.1 port 43044 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:44.958188 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:44.962946 systemd-logind[1448]: New session 16 of user core. Sep 12 17:32:44.970446 systemd[1]: Started session-16.scope - Session 16 of User core. 
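The pod_startup_latency_tracker lines above encode a simple relationship: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is the same span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. The sketch below reproduces the arithmetic from the calico-apiserver-797c87987f-th4cn line and recovers its reported values exactly:

```go
// Reproduces the arithmetic in the pod_startup_latency_tracker line for
// calico-apiserver-797c87987f-th4cn: SLO duration = end-to-end startup
// time minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-12 17:31:50 +0000 UTC")
	firstStartedPulling := mustParse("2025-09-12 17:32:25.562621296 +0000 UTC")
	lastFinishedPulling := mustParse("2025-09-12 17:32:40.262459301 +0000 UTC")
	observedRunning := mustParse("2025-09-12 17:32:41.376499146 +0000 UTC")

	e2e := observedRunning.Sub(created)
	pull := lastFinishedPulling.Sub(firstStartedPulling)
	slo := e2e - pull

	fmt.Println("podStartE2EDuration:", e2e) // 51.376499146s, as logged
	fmt.Println("podStartSLOduration:", slo) // 36.676661141s, as logged
}
```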
Sep 12 17:32:45.079882 containerd[1469]: time="2025-09-12T17:32:45.079806757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:45.081273 containerd[1469]: time="2025-09-12T17:32:45.081240342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.736627628s" Sep 12 17:32:45.081345 containerd[1469]: time="2025-09-12T17:32:45.081278915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:32:45.082965 containerd[1469]: time="2025-09-12T17:32:45.082932157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:32:45.274442 containerd[1469]: time="2025-09-12T17:32:45.274389132Z" level=info msg="CreateContainer within sandbox \"a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:32:45.342569 sshd[5662]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:45.350366 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:43044.service: Deactivated successfully. Sep 12 17:32:45.352933 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:32:45.354943 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:32:45.365262 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:43046.service - OpenSSH per-connection server daemon (10.0.0.1:43046). Sep 12 17:32:45.366560 systemd-logind[1448]: Removed session 16. Sep 12 17:32:45.399733 sshd[5675]: Accepted publickey for core from 10.0.0.1 port 43046 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:45.400900 sshd[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:45.411427 systemd-logind[1448]: New session 17 of user core. Sep 12 17:32:45.419367 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:32:45.726575 sshd[5675]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:45.738529 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:43046.service: Deactivated successfully. Sep 12 17:32:45.740633 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:32:45.741648 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:32:45.742908 systemd-logind[1448]: Removed session 17. 
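The sshd unit names threaded through these lines (for example sshd@15-10.0.0.50:22-10.0.0.1:43044.service) are per-connection instances spawned by a systemd socket with Accept=yes; the instance name encodes a connection counter plus the local and remote address, and logind then creates the session-<N>.scope units seen after each login. A hedged example of such a socket/template pair — Flatcar's shipped units may differ in detail:

```ini
# sshd.socket — accept each connection and spawn a per-connection instance;
# the instance name pattern "<n>-<local>:<port>-<remote>:<port>" matches
# units like sshd@15-10.0.0.50:22-10.0.0.1:43044.service in the log above.
# (Illustrative; not necessarily Flatcar's exact unit.)
[Unit]
Description=OpenSSH server socket

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target
```

```ini
# sshd@.service — one short-lived sshd per accepted connection; "-i" makes
# sshd read the connection from stdin. logind's session-<N>.scope for the
# authenticated user is what "Started session-17.scope" refers to.
[Unit]
Description=OpenSSH per-connection server daemon

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
```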
Sep 12 17:32:45.861975 containerd[1469]: time="2025-09-12T17:32:45.861902086Z" level=info msg="CreateContainer within sandbox \"a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"727684f2d703db1c55f5a892d6202327bca50ae89fc76d9d2fb3ed26940fed38\"" Sep 12 17:32:45.862534 containerd[1469]: time="2025-09-12T17:32:45.862512291Z" level=info msg="StartContainer for \"727684f2d703db1c55f5a892d6202327bca50ae89fc76d9d2fb3ed26940fed38\"" Sep 12 17:32:45.898449 systemd[1]: Started cri-containerd-727684f2d703db1c55f5a892d6202327bca50ae89fc76d9d2fb3ed26940fed38.scope - libcontainer container 727684f2d703db1c55f5a892d6202327bca50ae89fc76d9d2fb3ed26940fed38. Sep 12 17:32:46.028610 containerd[1469]: time="2025-09-12T17:32:46.028130029Z" level=info msg="StartContainer for \"727684f2d703db1c55f5a892d6202327bca50ae89fc76d9d2fb3ed26940fed38\" returns successfully" Sep 12 17:32:50.743093 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:49264.service - OpenSSH per-connection server daemon (10.0.0.1:49264). Sep 12 17:32:50.871914 sshd[5742]: Accepted publickey for core from 10.0.0.1 port 49264 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:50.876038 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:50.885615 systemd-logind[1448]: New session 18 of user core. Sep 12 17:32:50.891693 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:32:51.289051 systemd[1]: run-containerd-runc-k8s.io-f3640aeb574b91fde429a3823686ddf913a0b252740934f4fff92f46f5730fc1-runc.wkKSFS.mount: Deactivated successfully. Sep 12 17:32:51.426175 update_engine[1451]: I20250912 17:32:51.425967 1451 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 12 17:32:51.426175 update_engine[1451]: I20250912 17:32:51.426057 1451 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 12 17:32:51.427458 update_engine[1451]: I20250912 17:32:51.427416 1451 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 12 17:32:51.428121 update_engine[1451]: I20250912 17:32:51.428093 1451 omaha_request_params.cc:62] Current group set to lts Sep 12 17:32:51.428315 update_engine[1451]: I20250912 17:32:51.428288 1451 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 12 17:32:51.428315 update_engine[1451]: I20250912 17:32:51.428308 1451 update_attempter.cc:643] Scheduling an action processor start. 
Sep 12 17:32:51.428417 update_engine[1451]: I20250912 17:32:51.428337 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 12 17:32:51.428417 update_engine[1451]: I20250912 17:32:51.428398 1451 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 12 17:32:51.428511 update_engine[1451]: I20250912 17:32:51.428483 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 12 17:32:51.428511 update_engine[1451]: I20250912 17:32:51.428503 1451 omaha_request_action.cc:272] Request: Sep 12 17:32:51.428511 update_engine[1451]: [multi-line Omaha request XML body not captured; the angle-bracketed markup was lost in log extraction] Sep 12 17:32:51.438831 update_engine[1451]: I20250912 17:32:51.428514 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:32:51.438831 update_engine[1451]: I20250912 17:32:51.436707 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:32:51.438831 update_engine[1451]: I20250912 17:32:51.437088 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:32:51.438917 locksmithd[1478]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 12 17:32:51.451555 update_engine[1451]: E20250912 17:32:51.451444 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:32:51.451705 update_engine[1451]: I20250912 17:32:51.451582 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 12 17:32:51.465463 sshd[5742]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:51.471245 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:49264.service: Deactivated successfully. Sep 12 17:32:51.474181 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:32:51.475346 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:32:51.477186 systemd-logind[1448]: Removed session 18.
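update_engine is posting its Omaha request to the literal hostname "disabled", which is why every check fails with "Could not resolve host: disabled" and retries: pointing the update server at an unresolvable name is the conventional way to switch Flatcar's auto-updates off. A hedged example of the override file that produces this behaviour (the group matches the "Current group set to lts" line above):

```ini
# /etc/flatcar/update.conf — assumed override matching the log: GROUP feeds
# "Current group set to lts", and SERVER=disabled makes the Omaha POST fail
# DNS resolution on purpose, so no update is ever fetched.
GROUP=lts
SERVER=disabled
```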
Sep 12 17:32:51.561719 containerd[1469]: time="2025-09-12T17:32:51.561539938Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:51.635459 containerd[1469]: time="2025-09-12T17:32:51.635348777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 17:32:51.788601 containerd[1469]: time="2025-09-12T17:32:51.782609459Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:51.818741 containerd[1469]: time="2025-09-12T17:32:51.818416661Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:51.819096 containerd[1469]: time="2025-09-12T17:32:51.819054178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 6.736092686s" Sep 12 17:32:51.819173 containerd[1469]: time="2025-09-12T17:32:51.819096969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 17:32:51.820537 containerd[1469]: time="2025-09-12T17:32:51.820473134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:32:52.253733 containerd[1469]: time="2025-09-12T17:32:52.253680630Z" level=info msg="CreateContainer within sandbox \"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 17:32:53.259841 containerd[1469]: time="2025-09-12T17:32:53.259770687Z" level=info msg="CreateContainer within sandbox \"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ccda99bece027afbaee78080258b527f8a932df9365cc2e5e73c4db37706aa67\"" Sep 12 17:32:53.260487 containerd[1469]: time="2025-09-12T17:32:53.260464900Z" level=info msg="StartContainer for \"ccda99bece027afbaee78080258b527f8a932df9365cc2e5e73c4db37706aa67\"" Sep 12 17:32:53.332477 systemd[1]: Started cri-containerd-ccda99bece027afbaee78080258b527f8a932df9365cc2e5e73c4db37706aa67.scope - libcontainer container ccda99bece027afbaee78080258b527f8a932df9365cc2e5e73c4db37706aa67. Sep 12 17:32:53.519264 containerd[1469]: time="2025-09-12T17:32:53.518616712Z" level=info msg="StartContainer for \"ccda99bece027afbaee78080258b527f8a932df9365cc2e5e73c4db37706aa67\" returns successfully" Sep 12 17:32:55.895101 kubelet[2562]: E0912 17:32:55.895054 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:56.476389 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:49276.service - OpenSSH per-connection server daemon (10.0.0.1:49276). 
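The kubelet dns.go:153 warnings in this stretch fire because a resolv.conf honours at most three nameservers; kubelet keeps the first three and logs the applied line (here 1.1.1.1, 1.0.0.1, and 8.8.8.8 survived, and at least one further upstream was dropped). A small sketch of that trimming; the fourth entry below is illustrative, since the log does not say which nameserver was omitted:

```go
// Sketch of the trimming behind the "Nameserver limits exceeded" warning:
// resolv.conf supports at most three nameservers, so kubelet keeps the
// first three and reports the applied line.
package main

import (
	"fmt"
	"strings"
)

const maxDNSNameservers = 3 // resolv.conf limit enforced by kubelet

func applyNameserverLimit(nameservers []string) []string {
	if len(nameservers) <= maxDNSNameservers {
		return nameservers
	}
	applied := nameservers[:maxDNSNameservers]
	fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
		strings.Join(applied, " "))
	return applied
}

func main() {
	// Four upstreams configured; only the first three can be applied. The
	// fourth entry is an illustrative assumption, not from the log.
	applyNameserverLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
}
```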
Sep 12 17:32:56.682494 sshd[5852]: Accepted publickey for core from 10.0.0.1 port 49276 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:32:56.684524 sshd[5852]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:56.690583 systemd-logind[1448]: New session 19 of user core. Sep 12 17:32:56.697533 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:32:56.893401 kubelet[2562]: E0912 17:32:56.892458 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:56.918733 sshd[5852]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:56.928984 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:49276.service: Deactivated successfully. Sep 12 17:32:56.929363 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:32:56.931996 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:32:56.934031 systemd-logind[1448]: Removed session 19. Sep 12 17:32:58.181206 containerd[1469]: time="2025-09-12T17:32:58.181106443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:58.214858 containerd[1469]: time="2025-09-12T17:32:58.214733890Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:32:58.275197 containerd[1469]: time="2025-09-12T17:32:58.275098777Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:58.354586 containerd[1469]: time="2025-09-12T17:32:58.354522368Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:58.355342 containerd[1469]: time="2025-09-12T17:32:58.355297342Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.534770919s" Sep 12 17:32:58.355342 containerd[1469]: time="2025-09-12T17:32:58.355332059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:32:58.356432 containerd[1469]: time="2025-09-12T17:32:58.356244875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 17:32:58.541172 containerd[1469]: time="2025-09-12T17:32:58.541123531Z" level=info msg="CreateContainer within sandbox \"35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:32:58.883518 containerd[1469]: time="2025-09-12T17:32:58.883356165Z" level=info msg="CreateContainer within sandbox \"35acdf7b4f513beae34f8ec7cee0c1862786726b14919695c7c3bbcfed7e5914\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"48ae9af22234ed895475aea322c913b997bfcbb1109fdb5cdba8865e683937b1\"" Sep 12 17:32:58.884148 containerd[1469]: time="2025-09-12T17:32:58.884111945Z" level=info msg="StartContainer for \"48ae9af22234ed895475aea322c913b997bfcbb1109fdb5cdba8865e683937b1\"" Sep 12 17:32:58.932431 systemd[1]: Started cri-containerd-48ae9af22234ed895475aea322c913b997bfcbb1109fdb5cdba8865e683937b1.scope - libcontainer container 48ae9af22234ed895475aea322c913b997bfcbb1109fdb5cdba8865e683937b1. Sep 12 17:32:59.078361 containerd[1469]: time="2025-09-12T17:32:59.078305465Z" level=info msg="StartContainer for \"48ae9af22234ed895475aea322c913b997bfcbb1109fdb5cdba8865e683937b1\" returns successfully" Sep 12 17:32:59.662577 kubelet[2562]: I0912 17:32:59.662483 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7d58b7c7df-c2h2g" podStartSLOduration=34.267295495 podStartE2EDuration="1m5.659315355s" podCreationTimestamp="2025-09-12 17:31:54 +0000 UTC" firstStartedPulling="2025-09-12 17:32:26.96405339 +0000 UTC m=+53.178341023" lastFinishedPulling="2025-09-12 17:32:58.35607324 +0000 UTC m=+84.570360883" observedRunningTime="2025-09-12 17:32:59.605309733 +0000 UTC m=+85.819597366" watchObservedRunningTime="2025-09-12 17:32:59.659315355 +0000 UTC m=+85.873602988" Sep 12 17:33:01.366362 update_engine[1451]: I20250912 17:33:01.366250 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 12 17:33:01.366916 update_engine[1451]: I20250912 17:33:01.366658 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 12 17:33:01.366968 update_engine[1451]: I20250912 17:33:01.366942 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 12 17:33:01.374846 update_engine[1451]: E20250912 17:33:01.374754 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 12 17:33:01.374846 update_engine[1451]: I20250912 17:33:01.374853 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 12 17:33:01.937454 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:51444.service - OpenSSH per-connection server daemon (10.0.0.1:51444). Sep 12 17:33:02.069915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1550758342.mount: Deactivated successfully. Sep 12 17:33:02.070722 sshd[5965]: Accepted publickey for core from 10.0.0.1 port 51444 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4 Sep 12 17:33:02.072737 sshd[5965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:02.077281 systemd-logind[1448]: New session 20 of user core. Sep 12 17:33:02.085461 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:33:02.516316 sshd[5965]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:02.521908 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:51444.service: Deactivated successfully. Sep 12 17:33:02.524839 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:33:02.525649 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:33:02.527282 systemd-logind[1448]: Removed session 20. 
Sep 12 17:33:02.927887 containerd[1469]: time="2025-09-12T17:33:02.927733516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:02.933170 containerd[1469]: time="2025-09-12T17:33:02.933112450Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 17:33:02.945146 containerd[1469]: time="2025-09-12T17:33:02.945095946Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:02.960105 containerd[1469]: time="2025-09-12T17:33:02.960009329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:02.961030 containerd[1469]: time="2025-09-12T17:33:02.960991245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.604711114s"
Sep 12 17:33:02.961094 containerd[1469]: time="2025-09-12T17:33:02.961030098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 17:33:02.964587 containerd[1469]: time="2025-09-12T17:33:02.964547686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:33:02.975878 containerd[1469]: time="2025-09-12T17:33:02.975842891Z" level=info msg="CreateContainer within sandbox \"a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 17:33:03.152108 containerd[1469]: time="2025-09-12T17:33:03.152023058Z" level=info msg="CreateContainer within sandbox \"a47df5d097648a65e7b8730b79ad2698cc14558047b08a359b3b048604e4a84a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"8d96e729ad9320cb7502bd0b1af259cfee76e13b360fef3615de6b2559ef53be\""
Sep 12 17:33:03.152769 containerd[1469]: time="2025-09-12T17:33:03.152732088Z" level=info msg="StartContainer for \"8d96e729ad9320cb7502bd0b1af259cfee76e13b360fef3615de6b2559ef53be\""
Sep 12 17:33:03.199393 systemd[1]: Started cri-containerd-8d96e729ad9320cb7502bd0b1af259cfee76e13b360fef3615de6b2559ef53be.scope - libcontainer container 8d96e729ad9320cb7502bd0b1af259cfee76e13b360fef3615de6b2559ef53be.
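The CreateContainer/StartContainer pairs in these containerd entries are the CRI calls kubelet issues once a pull finishes. A rough sketch of the same two calls made directly against containerd's CRI socket using the published CRI v1 API; the sandbox ID and image reference are placeholders standing in for the values in the log, and real requests carry a much fuller container and sandbox config than shown here:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint; adjust if your socket path differs.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Placeholders: in the log, the sandbox is "a47df5d0..." and the image
	// reference is the whisker-backend digest pulled just before.
	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: "<sandbox-id>",
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "whisker-backend", Attempt: 0},
			Image:    &runtimeapi.ImageSpec{Image: "<image-ref>"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{},
	})
	if err != nil {
		log.Fatal(err)
	}
	// StartContainer is what produces the "StartContainer ... returns
	// successfully" entries; systemd then shows the matching
	// cri-containerd-<id>.scope unit for the new libcontainer process.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
}
```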
Sep 12 17:33:03.278093 containerd[1469]: time="2025-09-12T17:33:03.278045294Z" level=info msg="StartContainer for \"8d96e729ad9320cb7502bd0b1af259cfee76e13b360fef3615de6b2559ef53be\" returns successfully"
Sep 12 17:33:03.590442 kubelet[2562]: I0912 17:33:03.590164 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b89c964f6-g5w2p" podStartSLOduration=4.083914559 podStartE2EDuration="40.590137982s" podCreationTimestamp="2025-09-12 17:32:23 +0000 UTC" firstStartedPulling="2025-09-12 17:32:26.455653906 +0000 UTC m=+52.669941539" lastFinishedPulling="2025-09-12 17:33:02.961877329 +0000 UTC m=+89.176164962" observedRunningTime="2025-09-12 17:33:03.590038935 +0000 UTC m=+89.804326568" watchObservedRunningTime="2025-09-12 17:33:03.590137982 +0000 UTC m=+89.804425625"
Sep 12 17:33:05.892623 kubelet[2562]: E0912 17:33:05.892570 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:05.996586 containerd[1469]: time="2025-09-12T17:33:05.996509308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:06.017501 containerd[1469]: time="2025-09-12T17:33:06.017441520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 17:33:06.051485 containerd[1469]: time="2025-09-12T17:33:06.051391812Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:06.085148 containerd[1469]: time="2025-09-12T17:33:06.085083716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:33:06.085808 containerd[1469]: time="2025-09-12T17:33:06.085764652Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.121174336s"
Sep 12 17:33:06.085896 containerd[1469]: time="2025-09-12T17:33:06.085809246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 17:33:06.178884 containerd[1469]: time="2025-09-12T17:33:06.178730314Z" level=info msg="CreateContainer within sandbox \"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:33:06.352824 systemd[1]: run-containerd-runc-k8s.io-fdfb99f2b802cbf4c733286e7e2c028cedce8491d2134c894c9067109d22d9d9-runc.l1gxpF.mount: Deactivated successfully.
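The recurring dns.go:153 error above reflects the classic resolv.conf constraint: the glibc resolver honors at most three nameserver entries, so kubelet truncates the host's list, applies the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8), and logs the rest as omitted. A minimal sketch of that cap; the limit constant mirrors kubelet's behavior, the helper is illustrative, and the fourth server in the example is an assumption since the log only shows the applied three:

```go
package main

import "fmt"

// maxResolvConfNameservers mirrors the three-entry limit the glibc
// resolver honors and kubelet enforces; the helper is illustrative.
const maxResolvConfNameservers = 3

// applyNameserverLimit keeps the first three servers and reports the rest
// as omitted, which is what produces the "Nameserver limits exceeded" entry.
func applyNameserverLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxResolvConfNameservers {
		return servers, nil
	}
	return servers[:maxResolvConfNameservers], servers[maxResolvConfNameservers:]
}

func main() {
	// Hypothetical host config with one server too many; only the fourth
	// entry is invented, the first three match the log.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"}
	applied, omitted := applyNameserverLimit(servers)
	if len(omitted) > 0 {
		fmt.Printf("Nameserver limits exceeded; omitted %v, applied line: %v\n",
			omitted, applied)
	}
}
```

The error is logged repeatedly because kubelet re-resolves the DNS config on a cadence; it is a warning about silently dropped resolvers, not a failure of pod DNS itself.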
Sep 12 17:33:06.485187 containerd[1469]: time="2025-09-12T17:33:06.485135312Z" level=info msg="CreateContainer within sandbox \"6ed92b2aaa9208e01c0a708b1bfa16adf7725a46105653d2c19b6811f98090f2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"321f8779d2cefbc9525a2f44e9e37de98599d00ff0e2789fdf4dac5b5dc9aef9\""
Sep 12 17:33:06.485797 containerd[1469]: time="2025-09-12T17:33:06.485770842Z" level=info msg="StartContainer for \"321f8779d2cefbc9525a2f44e9e37de98599d00ff0e2789fdf4dac5b5dc9aef9\""
Sep 12 17:33:06.534384 systemd[1]: Started cri-containerd-321f8779d2cefbc9525a2f44e9e37de98599d00ff0e2789fdf4dac5b5dc9aef9.scope - libcontainer container 321f8779d2cefbc9525a2f44e9e37de98599d00ff0e2789fdf4dac5b5dc9aef9.
Sep 12 17:33:06.855635 containerd[1469]: time="2025-09-12T17:33:06.855498105Z" level=info msg="StartContainer for \"321f8779d2cefbc9525a2f44e9e37de98599d00ff0e2789fdf4dac5b5dc9aef9\" returns successfully"
Sep 12 17:33:07.240455 kubelet[2562]: I0912 17:33:07.240401 2562 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:33:07.245465 kubelet[2562]: I0912 17:33:07.245372 2562 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:33:07.534414 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:51456.service - OpenSSH per-connection server daemon (10.0.0.1:51456).
Sep 12 17:33:07.607581 sshd[6098]: Accepted publickey for core from 10.0.0.1 port 51456 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:07.610316 sshd[6098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:07.615373 systemd-logind[1448]: New session 21 of user core.
Sep 12 17:33:07.626456 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:33:07.910182 sshd[6098]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:07.915162 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:51456.service: Deactivated successfully.
Sep 12 17:33:07.917962 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:33:07.919235 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:33:07.920353 systemd-logind[1448]: Removed session 21.
Sep 12 17:33:11.366993 update_engine[1451]: I20250912 17:33:11.366873 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:33:11.367584 update_engine[1451]: I20250912 17:33:11.367290 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:33:11.367584 update_engine[1451]: I20250912 17:33:11.367533 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:33:11.377649 update_engine[1451]: E20250912 17:33:11.377586 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:33:11.377649 update_engine[1451]: I20250912 17:33:11.377647 1451 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Sep 12 17:33:12.921866 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:49500.service - OpenSSH per-connection server daemon (10.0.0.1:49500).
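The csi_plugin.go entries above are the kubelet side of CSI driver discovery: the node-driver-registrar container that just started exposes a registration socket, kubelet's plugin watcher calls GetInfo over it, validates the driver, and registers csi.tigera.io. A sketch of the registrar's side of that handshake using the kubelet plugin-registration API; the driver name, endpoint, and version come from the log, while the registration-socket filename is an assumption (the plugins_registry directory is the standard watch location):

```go
package main

import (
	"context"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// server answers kubelet's plugin-watcher handshake. The field values
// follow the log above; the wiring is a sketch, not the real registrar.
type server struct{}

func (server) GetInfo(ctx context.Context, r *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // "versions: 1.0.0" in the log
	}, nil
}

func (server) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !st.PluginRegistered {
		log.Printf("kubelet rejected registration: %s", st.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet watches this directory for new registration sockets;
	// the exact filename here is illustrative.
	sock := "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock"
	os.Remove(sock)
	lis, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatal(err)
	}
	g := grpc.NewServer()
	registerapi.RegisterRegistrationServer(g, server{})
	log.Fatal(g.Serve(lis))
}
```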
Sep 12 17:33:12.973236 sshd[6116]: Accepted publickey for core from 10.0.0.1 port 49500 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:12.974914 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:12.979023 systemd-logind[1448]: New session 22 of user core.
Sep 12 17:33:12.993360 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:33:13.160906 sshd[6116]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:13.176685 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:49500.service: Deactivated successfully.
Sep 12 17:33:13.179609 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:33:13.182237 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:33:13.190707 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:49512.service - OpenSSH per-connection server daemon (10.0.0.1:49512).
Sep 12 17:33:13.191843 systemd-logind[1448]: Removed session 22.
Sep 12 17:33:13.233005 sshd[6131]: Accepted publickey for core from 10.0.0.1 port 49512 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:13.234840 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:13.240067 systemd-logind[1448]: New session 23 of user core.
Sep 12 17:33:13.249376 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:33:13.587810 sshd[6131]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:13.604353 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:49512.service: Deactivated successfully.
Sep 12 17:33:13.607675 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:33:13.610229 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:33:13.612151 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:49518.service - OpenSSH per-connection server daemon (10.0.0.1:49518).
Sep 12 17:33:13.613970 systemd-logind[1448]: Removed session 23.
Sep 12 17:33:13.666633 sshd[6145]: Accepted publickey for core from 10.0.0.1 port 49518 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:13.668563 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:13.673066 systemd-logind[1448]: New session 24 of user core.
Sep 12 17:33:13.682378 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:33:14.567054 sshd[6145]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:14.579837 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:49518.service: Deactivated successfully.
Sep 12 17:33:14.583133 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:33:14.585690 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:33:14.592530 systemd[1]: Started sshd@24-10.0.0.50:22-10.0.0.1:49528.service - OpenSSH per-connection server daemon (10.0.0.1:49528).
Sep 12 17:33:14.594557 systemd-logind[1448]: Removed session 24.
Sep 12 17:33:14.632082 sshd[6171]: Accepted publickey for core from 10.0.0.1 port 49528 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:14.634177 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:14.640179 systemd-logind[1448]: New session 25 of user core.
Sep 12 17:33:14.645412 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:33:15.140528 sshd[6171]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:15.151633 systemd[1]: sshd@24-10.0.0.50:22-10.0.0.1:49528.service: Deactivated successfully.
Sep 12 17:33:15.154764 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:33:15.157939 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:33:15.164963 systemd[1]: Started sshd@25-10.0.0.50:22-10.0.0.1:49542.service - OpenSSH per-connection server daemon (10.0.0.1:49542).
Sep 12 17:33:15.167722 systemd-logind[1448]: Removed session 25.
Sep 12 17:33:15.220691 sshd[6184]: Accepted publickey for core from 10.0.0.1 port 49542 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:15.222551 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:15.226997 systemd-logind[1448]: New session 26 of user core.
Sep 12 17:33:15.237399 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 17:33:15.377199 sshd[6184]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:15.382671 systemd[1]: sshd@25-10.0.0.50:22-10.0.0.1:49542.service: Deactivated successfully.
Sep 12 17:33:15.385305 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 17:33:15.386268 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Sep 12 17:33:15.387391 systemd-logind[1448]: Removed session 26.
Sep 12 17:33:15.893243 kubelet[2562]: E0912 17:33:15.893161 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:20.395876 systemd[1]: Started sshd@26-10.0.0.50:22-10.0.0.1:59502.service - OpenSSH per-connection server daemon (10.0.0.1:59502).
Sep 12 17:33:20.462042 sshd[6200]: Accepted publickey for core from 10.0.0.1 port 59502 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:20.467937 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:20.473606 systemd-logind[1448]: New session 27 of user core.
Sep 12 17:33:20.483495 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 17:33:20.623514 sshd[6200]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:20.629314 systemd[1]: sshd@26-10.0.0.50:22-10.0.0.1:59502.service: Deactivated successfully.
Sep 12 17:33:20.637075 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 17:33:20.637973 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit.
Sep 12 17:33:20.639021 systemd-logind[1448]: Removed session 27.
Sep 12 17:33:21.364803 update_engine[1451]: I20250912 17:33:21.364670 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:33:21.365505 update_engine[1451]: I20250912 17:33:21.365097 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:33:21.365505 update_engine[1451]: I20250912 17:33:21.365409 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 17:33:21.375021 update_engine[1451]: E20250912 17:33:21.374950 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:33:21.375151 update_engine[1451]: I20250912 17:33:21.375038 1451 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 12 17:33:21.376773 update_engine[1451]: I20250912 17:33:21.376683 1451 omaha_request_action.cc:617] Omaha request response:
Sep 12 17:33:21.376928 update_engine[1451]: E20250912 17:33:21.376876 1451 omaha_request_action.cc:636] Omaha request network transfer failed.
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378587 1451 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378624 1451 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378633 1451 update_attempter.cc:306] Processing Done.
Sep 12 17:33:21.379537 update_engine[1451]: E20250912 17:33:21.378664 1451 update_attempter.cc:619] Update failed.
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378685 1451 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378695 1451 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378705 1451 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378800 1451 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378829 1451 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378839 1451 omaha_request_action.cc:272] Request:
Sep 12 17:33:21.379537 update_engine[1451]:
Sep 12 17:33:21.379537 update_engine[1451]:
Sep 12 17:33:21.379537 update_engine[1451]:
Sep 12 17:33:21.379537 update_engine[1451]:
Sep 12 17:33:21.379537 update_engine[1451]:
Sep 12 17:33:21.379537 update_engine[1451]:
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.378850 1451 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.379192 1451 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 17:33:21.379537 update_engine[1451]: I20250912 17:33:21.379498 1451 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
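Taken together with the "retry 2" and "retry 3" entries earlier, this block traces a deliberate failure loop: the Omaha update server on this host is literally configured as "disabled", so every transfer fails DNS resolution inside the 1-second timeout, the fetcher exhausts its retries, and the transport failure is converted to fetch code 2000 and recorded as payload error 37 (kActionCodeOmahaErrorInHTTPResponse) before an error event is posted. A compressed Go sketch of that control flow; the URL path and the single-loop retry structure are simplifications (the real fetcher spreads retries across timer callbacks), and the constants only echo the values in the log:

```go
package main

import (
	"errors"
	"fmt"
	"net/http"
	"time"
)

// Error codes as they appear in the log; the mapping is illustrative.
const (
	codeOmahaRequestHTTPResponseBase = 2000
	codeOmahaErrorInHTTPResponse     = 37 // kActionCodeOmahaErrorInHTTPResponse
)

// fetch attempts an Omaha POST up to maxRetries times with a per-attempt
// timeout, mirroring the "No HTTP response, retry N" lines above.
func fetch(url string, maxRetries int) error {
	client := &http.Client{Timeout: 1 * time.Second} // "timeout source: 1 seconds"
	for attempt := 1; attempt <= maxRetries; attempt++ {
		resp, err := client.Post(url, "text/xml", nil)
		if err == nil {
			resp.Body.Close()
			return nil
		}
		fmt.Printf("Unable to get http response code: %v\n", err)
		fmt.Printf("No HTTP response, retry %d\n", attempt)
	}
	return errors.New("transfer resulted in an error (0), 0 bytes downloaded")
}

func main() {
	// "disabled" is the literal host name configured here, which is why
	// resolution can never succeed; the /v1/update/ path is an assumption.
	if err := fetch("http://disabled/v1/update/", 3); err != nil {
		// Convert the transport failure the way the log shows: a 2000-range
		// fetch code becomes payload error 37, and an error event is sent.
		fmt.Printf("converting fetch code %d to payload error %d: %v\n",
			codeOmahaRequestHTTPResponseBase, codeOmahaErrorInHTTPResponse, err)
	}
}
```

The bare update_engine continuation lines around "Request:" are where the Omaha request XML appeared in the original journal; its content did not survive extraction and is left elided here.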
Sep 12 17:33:21.382825 locksmithd[1478]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Sep 12 17:33:21.387184 update_engine[1451]: E20250912 17:33:21.387136 1451 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 17:33:21.387266 update_engine[1451]: I20250912 17:33:21.387195 1451 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Sep 12 17:33:21.387266 update_engine[1451]: I20250912 17:33:21.387228 1451 omaha_request_action.cc:617] Omaha request response:
Sep 12 17:33:21.387266 update_engine[1451]: I20250912 17:33:21.387241 1451 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 12 17:33:21.387266 update_engine[1451]: I20250912 17:33:21.387250 1451 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Sep 12 17:33:21.387266 update_engine[1451]: I20250912 17:33:21.387259 1451 update_attempter.cc:306] Processing Done.
Sep 12 17:33:21.387418 update_engine[1451]: I20250912 17:33:21.387267 1451 update_attempter.cc:310] Error event sent.
Sep 12 17:33:21.387418 update_engine[1451]: I20250912 17:33:21.387288 1451 update_check_scheduler.cc:74] Next update check in 45m3s
Sep 12 17:33:21.387624 locksmithd[1478]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Sep 12 17:33:25.644190 systemd[1]: Started sshd@27-10.0.0.50:22-10.0.0.1:59512.service - OpenSSH per-connection server daemon (10.0.0.1:59512).
Sep 12 17:33:25.691357 sshd[6238]: Accepted publickey for core from 10.0.0.1 port 59512 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:25.693523 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:25.699135 systemd-logind[1448]: New session 28 of user core.
Sep 12 17:33:25.714604 systemd[1]: Started session-28.scope - Session 28 of User core.
Sep 12 17:33:25.925505 sshd[6238]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:25.929610 systemd[1]: sshd@27-10.0.0.50:22-10.0.0.1:59512.service: Deactivated successfully.
Sep 12 17:33:25.931902 systemd[1]: session-28.scope: Deactivated successfully.
Sep 12 17:33:25.932667 systemd-logind[1448]: Session 28 logged out. Waiting for processes to exit.
Sep 12 17:33:25.933549 systemd-logind[1448]: Removed session 28.
Sep 12 17:33:30.951520 systemd[1]: Started sshd@28-10.0.0.50:22-10.0.0.1:37534.service - OpenSSH per-connection server daemon (10.0.0.1:37534).
Sep 12 17:33:30.984018 sshd[6273]: Accepted publickey for core from 10.0.0.1 port 37534 ssh2: RSA SHA256:/3iCKt022AmlyeTJlfGC2G4n7imjC85tKhX8vqxRig4
Sep 12 17:33:30.985738 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:33:30.989985 systemd-logind[1448]: New session 29 of user core.
Sep 12 17:33:31.001456 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 12 17:33:31.147750 sshd[6273]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:31.152196 systemd[1]: sshd@28-10.0.0.50:22-10.0.0.1:37534.service: Deactivated successfully.
Sep 12 17:33:31.154825 systemd[1]: session-29.scope: Deactivated successfully.
Sep 12 17:33:31.155493 systemd-logind[1448]: Session 29 logged out. Waiting for processes to exit.
Sep 12 17:33:31.156669 systemd-logind[1448]: Removed session 29.
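After the error event is sent, the attempter goes idle (locksmithd records UPDATE_STATUS_IDLE) and update_engine schedules the next check 45m3s out; the odd few seconds on top of the round interval is the jitter periodic update checks carry so a fleet does not poll in lockstep. A small sketch of that jittered scheduling; the base interval and fuzz width are assumptions consistent with the logged value, not constants read from update_engine:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextCheck returns a periodic-check delay with uniform random fuzz,
// the pattern behind "Next update check in 45m3s". Base and fuzz values
// here are assumptions; the log does not state them.
func nextCheck(base, fuzz time.Duration) time.Duration {
	// Spread checks uniformly over [base-fuzz/2, base+fuzz/2].
	return base - fuzz/2 + time.Duration(rand.Int63n(int64(fuzz)))
}

func main() {
	d := nextCheck(45*time.Minute, 10*time.Minute)
	fmt.Printf("Next update check in %s\n", d.Round(time.Second))
}
```

Because the server stays "disabled", this cycle will repeat indefinitely: each scheduled check fails resolution, posts an error event, and reschedules, which is the expected steady state for a host with automatic updates switched off.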