Sep 10 00:36:55.982734 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 22:56:44 -00 2025
Sep 10 00:36:55.982759 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:36:55.982770 kernel: BIOS-provided physical RAM map:
Sep 10 00:36:55.982777 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 00:36:55.982783 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 00:36:55.982789 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 00:36:55.982797 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 00:36:55.982803 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 00:36:55.982810 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 10 00:36:55.982816 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 10 00:36:55.982825 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 10 00:36:55.982832 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 10 00:36:55.982841 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 10 00:36:55.982848 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 10 00:36:55.982858 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 10 00:36:55.982865 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 00:36:55.982875 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 10 00:36:55.982882 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 10 00:36:55.982889 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 00:36:55.982897 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:36:55.982905 kernel: NX (Execute Disable) protection: active
Sep 10 00:36:55.982913 kernel: APIC: Static calls initialized
Sep 10 00:36:55.982920 kernel: efi: EFI v2.7 by EDK II
Sep 10 00:36:55.982927 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 10 00:36:55.982936 kernel: SMBIOS 2.8 present.
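The usable ranges in the e820 map above are what the kernel will actually manage as RAM. A minimal sketch in Python that sums them (the range list is transcribed from this log; the slight surplus over the kernel's later "2567000K" figure is the handful of low pages reserved by the e820 update/remove steps further below):

    # Usable ranges copied from the BIOS-e820 map above (end addresses inclusive).
    usable = [
        (0x0000000000000000, 0x000000000009ffff),
        (0x0000000000100000, 0x00000000007fffff),
        (0x0000000000808000, 0x000000000080afff),
        (0x000000000080c000, 0x000000000080ffff),
        (0x0000000000900000, 0x000000009c8eefff),
        (0x000000009cbff000, 0x000000009cf3ffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"usable: {total} bytes = {total // 1024} KiB")
    # -> 2628677632 bytes = 2567068 KiB, a few pages above the
    #    "Memory: 2400600K/2567000K available" line later in this log.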
Sep 10 00:36:55.982944 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 10 00:36:55.982952 kernel: Hypervisor detected: KVM
Sep 10 00:36:55.982963 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:36:55.982970 kernel: kvm-clock: using sched offset of 5198275886 cycles
Sep 10 00:36:55.982977 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:36:55.982985 kernel: tsc: Detected 2794.748 MHz processor
Sep 10 00:36:55.982992 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:36:55.983000 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:36:55.983007 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 10 00:36:55.983014 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 10 00:36:55.983022 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:36:55.983032 kernel: Using GB pages for direct mapping
Sep 10 00:36:55.983039 kernel: Secure boot disabled
Sep 10 00:36:55.983046 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:36:55.983053 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 10 00:36:55.983075 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:36:55.983083 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:55.983091 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:55.983101 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 10 00:36:55.983108 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:55.983143 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:55.983151 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:55.983158 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:36:55.983166 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 10 00:36:55.983173 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 10 00:36:55.983184 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 10 00:36:55.983192 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 10 00:36:55.983199 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 10 00:36:55.983207 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 10 00:36:55.983214 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 10 00:36:55.983221 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 10 00:36:55.983229 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 10 00:36:55.983236 kernel: No NUMA configuration found
Sep 10 00:36:55.983247 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 10 00:36:55.983257 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 10 00:36:55.983265 kernel: Zone ranges:
Sep 10 00:36:55.983272 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:36:55.983280 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 10 00:36:55.983287 kernel: Normal empty
Sep 10 00:36:55.983294 kernel: Movable zone start for each node
Sep 10 00:36:55.983302 kernel: Early memory node ranges
Sep 10 00:36:55.983309 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 10 00:36:55.983316 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 10 00:36:55.983324 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 10 00:36:55.983334 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 10 00:36:55.983341 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 10 00:36:55.983349 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 10 00:36:55.983358 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 10 00:36:55.983366 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:36:55.983373 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 10 00:36:55.983381 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 10 00:36:55.983388 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:36:55.983395 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 10 00:36:55.983405 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 10 00:36:55.983413 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 10 00:36:55.983421 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:36:55.983428 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:36:55.983436 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:36:55.983443 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:36:55.983451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:36:55.983458 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:36:55.983465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:36:55.983475 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:36:55.983483 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:36:55.983490 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:36:55.983498 kernel: TSC deadline timer available
Sep 10 00:36:55.983505 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:36:55.983513 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 10 00:36:55.983520 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:36:55.983527 kernel: kvm-guest: setup PV sched yield
Sep 10 00:36:55.983535 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 10 00:36:55.983545 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:36:55.983555 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:36:55.983563 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:36:55.983570 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 10 00:36:55.983578 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 10 00:36:55.983585 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:36:55.983592 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:36:55.983600 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:36:55.983608 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:36:55.983621 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:36:55.983629 kernel: random: crng init done
Sep 10 00:36:55.983636 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:36:55.983644 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:36:55.983651 kernel: Fallback order for Node 0: 0
Sep 10 00:36:55.983659 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 10 00:36:55.983666 kernel: Policy zone: DMA32
Sep 10 00:36:55.983674 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:36:55.983681 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 166140K reserved, 0K cma-reserved)
Sep 10 00:36:55.983692 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:36:55.983699 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 10 00:36:55.983707 kernel: ftrace: allocated 149 pages with 4 groups
Sep 10 00:36:55.983714 kernel: Dynamic Preempt: voluntary
Sep 10 00:36:55.983733 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:36:55.983744 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:36:55.983752 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:36:55.983760 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:36:55.983768 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:36:55.983776 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:36:55.983786 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:36:55.983799 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:36:55.983809 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:36:55.983821 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:36:55.983831 kernel: Console: colour dummy device 80x25
Sep 10 00:36:55.983841 kernel: printk: console [ttyS0] enabled
Sep 10 00:36:55.983854 kernel: ACPI: Core revision 20230628
Sep 10 00:36:55.983864 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:36:55.983873 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:36:55.983883 kernel: x2apic enabled
Sep 10 00:36:55.983893 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 10 00:36:55.983902 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 10 00:36:55.983912 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 10 00:36:55.983922 kernel: kvm-guest: setup PV IPIs
Sep 10 00:36:55.983930 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:36:55.983941 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:36:55.983949 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 10 00:36:55.983956 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:36:55.983964 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:36:55.983972 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:36:55.983980 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:36:55.983987 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:36:55.983995 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:36:55.984003 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:36:55.984013 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:36:55.984021 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:36:55.984029 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:36:55.984037 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 10 00:36:55.984047 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 10 00:36:55.984055 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 10 00:36:55.984069 kernel: active return thunk: srso_return_thunk
Sep 10 00:36:55.984077 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 10 00:36:55.984088 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:36:55.984096 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:36:55.984104 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:36:55.984111 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:36:55.984153 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 10 00:36:55.984161 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:36:55.984169 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:36:55.984177 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:36:55.984184 kernel: landlock: Up and running.
Sep 10 00:36:55.984195 kernel: SELinux: Initializing.
Sep 10 00:36:55.984203 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:36:55.984211 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:36:55.984219 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:36:55.984227 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:36:55.984235 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:36:55.984243 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:36:55.984251 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:36:55.984258 kernel: ... version: 0
Sep 10 00:36:55.984269 kernel: ... bit width: 48
Sep 10 00:36:55.984276 kernel: ... generic registers: 6
Sep 10 00:36:55.984284 kernel: ... value mask: 0000ffffffffffff
Sep 10 00:36:55.984292 kernel: ... max period: 00007fffffffffff
Sep 10 00:36:55.984300 kernel: ... fixed-purpose events: 0
Sep 10 00:36:55.984307 kernel: ... event mask: 000000000000003f
Sep 10 00:36:55.984315 kernel: signal: max sigframe size: 1776
Sep 10 00:36:55.984323 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:36:55.984330 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:36:55.984341 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:36:55.984348 kernel: smpboot: x86: Booting SMP configuration:
Sep 10 00:36:55.984356 kernel: .... node #0, CPUs: #1 #2 #3
Sep 10 00:36:55.984364 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:36:55.984375 kernel: smpboot: Max logical packages: 1
Sep 10 00:36:55.984383 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 10 00:36:55.984390 kernel: devtmpfs: initialized
Sep 10 00:36:55.984398 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:36:55.984406 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 10 00:36:55.984414 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 10 00:36:55.984424 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 10 00:36:55.984432 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 10 00:36:55.984440 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 10 00:36:55.984448 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:36:55.984456 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:36:55.984464 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:36:55.984471 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:36:55.984479 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:36:55.984490 kernel: audit: type=2000 audit(1757464614.527:1): state=initialized audit_enabled=0 res=1
Sep 10 00:36:55.984497 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:36:55.984505 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:36:55.984513 kernel: cpuidle: using governor menu
Sep 10 00:36:55.984521 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:36:55.984528 kernel: dca service started, version 1.12.1
Sep 10 00:36:55.984536 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:36:55.984544 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 10 00:36:55.984552 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:36:55.984562 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
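A quick sanity check on the BogoMIPS figures above: the kernel derives them from lpj (loops per jiffy) as lpj / (500000 / HZ), truncated to two decimals. A sketch, assuming the usual HZ=1000 tick rate (the HZ value itself is not printed in this log):

    # BogoMIPS = lpj / (500000 / HZ); HZ=1000 is an assumption here.
    lpj, hz, cpus = 2794748, 1000, 4   # lpj from "(lpj=2794748)" above
    def bogomips(loops: int) -> str:
        hundredths = loops * hz * 100 // 500000   # truncate, as the kernel does
        return "%d.%02d" % (hundredths // 100, hundredths % 100)
    print(bogomips(lpj))          # 5589.49  -> "Calibrating delay loop (skipped)"
    print(bogomips(cpus * lpj))   # 22357.98 -> "Total of 4 processors activated"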
Sep 10 00:36:55.984570 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:36:55.984586 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:36:55.984593 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:36:55.984601 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:36:55.984609 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:36:55.984617 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:36:55.984624 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:36:55.984632 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:36:55.984644 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 10 00:36:55.984654 kernel: ACPI: Interpreter enabled
Sep 10 00:36:55.984662 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:36:55.984669 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:36:55.984677 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:36:55.984685 kernel: PCI: Using E820 reservations for host bridge windows
Sep 10 00:36:55.984693 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:36:55.984701 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:36:55.984917 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:36:55.985073 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:36:55.985266 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:36:55.985278 kernel: PCI host bridge to bus 0000:00
Sep 10 00:36:55.985431 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:36:55.985552 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:36:55.985670 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:36:55.985794 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:36:55.985961 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:36:55.986187 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 10 00:36:55.986309 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:36:55.986489 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:36:55.986648 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:36:55.986792 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 10 00:36:55.986935 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 10 00:36:55.987060 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 10 00:36:55.987217 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 10 00:36:55.987349 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:36:55.987494 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:36:55.987622 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 10 00:36:55.987754 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 10 00:36:55.987905 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 10 00:36:55.988093 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:36:55.988243 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 10 00:36:55.988371 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 10 00:36:55.988501 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 10 00:36:55.988756 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:36:55.988980 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 10 00:36:55.989131 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 10 00:36:55.989263 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 10 00:36:55.989390 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 10 00:36:55.989533 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:36:55.989661 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:36:55.989809 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:36:55.989961 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 10 00:36:55.990176 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 10 00:36:55.990438 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:36:55.990596 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 10 00:36:55.990609 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:36:55.990617 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:36:55.990625 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:36:55.990638 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:36:55.990646 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:36:55.990654 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:36:55.990662 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:36:55.990669 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:36:55.990677 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:36:55.990685 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:36:55.990693 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:36:55.990700 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:36:55.990711 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:36:55.990719 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:36:55.990726 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:36:55.990734 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:36:55.990742 kernel: iommu: Default domain type: Translated
Sep 10 00:36:55.990750 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:36:55.990758 kernel: efivars: Registered efivars operations
Sep 10 00:36:55.990766 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:36:55.990773 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:36:55.990781 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 10 00:36:55.990792 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 10 00:36:55.990799 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 10 00:36:55.990807 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 10 00:36:55.990952 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:36:55.991183 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:36:55.991316 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:36:55.991327 kernel: vgaarb: loaded
Sep 10 00:36:55.991335 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:36:55.991351 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:36:55.991359 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:36:55.991367 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:36:55.991375 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:36:55.991383 kernel: pnp: PnP ACPI init
Sep 10 00:36:55.991541 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:36:55.991553 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:36:55.991562 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:36:55.991573 kernel: NET: Registered PF_INET protocol family
Sep 10 00:36:55.991582 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:36:55.991590 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:36:55.991598 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:36:55.991606 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:36:55.991614 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:36:55.991622 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:36:55.991630 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:36:55.991638 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:36:55.991648 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:36:55.991656 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:36:55.991839 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 10 00:36:55.992015 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 10 00:36:55.992193 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:36:55.992311 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:36:55.992425 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:36:55.992540 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:36:55.992672 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:36:55.992822 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 10 00:36:55.992844 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:36:55.992853 kernel: Initialise system trusted keyrings
Sep 10 00:36:55.992861 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:36:55.992871 kernel: Key type asymmetric registered
Sep 10 00:36:55.992894 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:36:55.992906 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 10 00:36:55.992929 kernel: io scheduler mq-deadline registered
Sep 10 00:36:55.992956 kernel: io scheduler kyber registered
Sep 10 00:36:55.992966 kernel: io scheduler bfq registered
Sep 10 00:36:55.992976 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:36:55.992986 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:36:55.993001 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:36:55.993015 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:36:55.993025 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:36:55.993036 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:36:55.993045 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:36:55.993060 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:36:55.993080 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:36:55.993255 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:36:55.993270 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:36:55.993392 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:36:55.993512 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:36:55 UTC (1757464615)
Sep 10 00:36:55.993641 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:36:55.993652 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 10 00:36:55.993665 kernel: efifb: probing for efifb
Sep 10 00:36:55.993674 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 10 00:36:55.993682 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 10 00:36:55.993690 kernel: efifb: scrolling: redraw
Sep 10 00:36:55.993698 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 10 00:36:55.993707 kernel: Console: switching to colour frame buffer device 100x37
Sep 10 00:36:55.993732 kernel: fb0: EFI VGA frame buffer device
Sep 10 00:36:55.993743 kernel: pstore: Using crash dump compression: deflate
Sep 10 00:36:55.993751 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 10 00:36:55.993762 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:36:55.993770 kernel: Segment Routing with IPv6
Sep 10 00:36:55.993778 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:36:55.993786 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:36:55.993795 kernel: Key type dns_resolver registered
Sep 10 00:36:55.993803 kernel: IPI shorthand broadcast: enabled
Sep 10 00:36:55.993812 kernel: sched_clock: Marking stable (1264004109, 127270765)->(1417730951, -26456077)
Sep 10 00:36:55.993822 kernel: registered taskstats version 1
Sep 10 00:36:55.993832 kernel: Loading compiled-in X.509 certificates
Sep 10 00:36:55.993846 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: a614f1c62f27a560d677bbf0283703118c9005ec'
Sep 10 00:36:55.993856 kernel: Key type .fscrypt registered
Sep 10 00:36:55.993866 kernel: Key type fscrypt-provisioning registered
Sep 10 00:36:55.993876 kernel: ima: No TPM chip found, activating TPM-bypass!
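The rtc_cmos record above states the same instant twice, as an ISO timestamp and as epoch seconds in parentheses; a one-liner confirms they agree:

    from datetime import datetime, timezone
    # 1757464615 is the epoch value in the rtc_cmos record above.
    print(datetime.fromtimestamp(1757464615, tz=timezone.utc).isoformat())
    # -> 2025-09-10T00:36:55+00:00, matching
    #    "setting system clock to 2025-09-10T00:36:55 UTC"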
Sep 10 00:36:55.993886 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:36:55.993896 kernel: ima: No architecture policies found
Sep 10 00:36:55.993906 kernel: clk: Disabling unused clocks
Sep 10 00:36:55.993917 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 10 00:36:55.993927 kernel: Write protecting the kernel read-only data: 36864k
Sep 10 00:36:55.993940 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 10 00:36:55.993949 kernel: Run /init as init process
Sep 10 00:36:55.993957 kernel: with arguments:
Sep 10 00:36:55.993965 kernel: /init
Sep 10 00:36:55.993973 kernel: with environment:
Sep 10 00:36:55.993981 kernel: HOME=/
Sep 10 00:36:55.993989 kernel: TERM=linux
Sep 10 00:36:55.993997 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:36:55.994022 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:36:55.994038 systemd[1]: Detected virtualization kvm.
Sep 10 00:36:55.995041 systemd[1]: Detected architecture x86-64.
Sep 10 00:36:55.995055 systemd[1]: Running in initrd.
Sep 10 00:36:55.995083 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:36:55.995094 systemd[1]: Hostname set to <localhost>.
Sep 10 00:36:55.995103 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:36:55.995112 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:36:55.995142 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:36:55.995151 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:36:55.995160 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:36:55.995169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:36:55.995181 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:36:55.995190 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:36:55.995200 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:36:55.995209 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:36:55.995218 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:36:55.995227 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:36:55.995235 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:36:55.995247 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:36:55.995255 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:36:55.995264 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:36:55.995273 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:36:55.995281 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:36:55.995290 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:36:55.995299 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:36:55.995311 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:36:55.995320 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:36:55.995331 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:36:55.995340 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:36:55.995349 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:36:55.995357 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:36:55.995366 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:36:55.995375 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:36:55.995383 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:36:55.995392 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:36:55.995403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:36:55.995412 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:36:55.995420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:36:55.995429 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:36:55.995461 systemd-journald[193]: Collecting audit messages is disabled.
Sep 10 00:36:55.995485 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:36:55.995494 systemd-journald[193]: Journal started
Sep 10 00:36:55.995516 systemd-journald[193]: Runtime Journal (/run/log/journal/04f7e83b1e03483a858c83f4f66f3651) is 6.0M, max 48.3M, 42.2M free.
Sep 10 00:36:55.990517 systemd-modules-load[194]: Inserted module 'overlay'
Sep 10 00:36:55.997588 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:36:55.999509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:36:56.002398 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:36:56.009241 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:36:56.014586 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:36:56.018197 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:36:56.031437 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:36:56.035812 kernel: Bridge firewalling registered
Sep 10 00:36:56.034938 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 10 00:36:56.036255 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:36:56.048332 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:36:56.049717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:36:56.051741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:36:56.053108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:36:56.056955 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:36:56.067618 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:36:56.072046 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:36:56.075073 dracut-cmdline[226]: dracut-dracut-053
Sep 10 00:36:56.078733 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:36:56.116624 systemd-resolved[234]: Positive Trust Anchors:
Sep 10 00:36:56.116648 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:36:56.116681 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:36:56.121482 systemd-resolved[234]: Defaulting to hostname 'linux'.
Sep 10 00:36:56.123153 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:36:56.127392 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:36:56.179161 kernel: SCSI subsystem initialized
Sep 10 00:36:56.189139 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:36:56.200146 kernel: iscsi: registered transport (tcp)
Sep 10 00:36:56.221229 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:36:56.221257 kernel: QLogic iSCSI HBA Driver
Sep 10 00:36:56.268653 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:36:56.277237 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:36:56.301561 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:36:56.301597 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:36:56.302554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:36:56.343152 kernel: raid6: avx2x4 gen() 30370 MB/s
Sep 10 00:36:56.360143 kernel: raid6: avx2x2 gen() 31098 MB/s
Sep 10 00:36:56.377196 kernel: raid6: avx2x1 gen() 25746 MB/s
Sep 10 00:36:56.377219 kernel: raid6: using algorithm avx2x2 gen() 31098 MB/s
Sep 10 00:36:56.395181 kernel: raid6: .... xor() 19749 MB/s, rmw enabled
Sep 10 00:36:56.395232 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:36:56.416147 kernel: xor: automatically using best checksumming function avx
Sep 10 00:36:56.573163 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:36:56.585874 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:36:56.594314 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:36:56.607481 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 10 00:36:56.612412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
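Note the duplicated switches in the dracut-cmdline record above: the line as dracut sees it carries rd.driver.pre=btrfs plus a second copy of rootflags=rw mount.usrflags=ro ahead of the BOOT_IMAGE arguments, and repeating identical values is harmless. A minimal sketch for splitting such a line into key/value pairs while preserving duplicates (quoting is ignored; the string is abridged from the record above):

    cmdline = (
        "rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "rootflags=rw mount.usrflags=ro root=LABEL=ROOT console=ttyS0,115200"
    )
    params = []
    for tok in cmdline.split():
        key, sep, val = tok.partition("=")
        params.append((key, val if sep else None))  # flag-style params get None
    for k, v in params:
        print(k, "=", v)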
Sep 10 00:36:56.624279 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:36:56.637615 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Sep 10 00:36:56.669511 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:36:56.681307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:36:56.753369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:36:56.761289 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:36:56.776701 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:36:56.779090 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:36:56.783094 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:36:56.784663 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:36:56.791211 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 10 00:36:56.796433 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:36:56.801971 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:36:56.802013 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:36:56.802029 kernel: GPT:9289727 != 19775487
Sep 10 00:36:56.802052 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:36:56.801947 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:36:56.808334 kernel: GPT:9289727 != 19775487
Sep 10 00:36:56.808355 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:36:56.808369 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:36:56.815416 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:36:56.821167 kernel: libata version 3.00 loaded.
Sep 10 00:36:56.831139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:36:56.842516 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:36:56.842833 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:36:56.842852 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:36:56.832413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:36:56.848533 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:36:56.836925 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:36:56.838285 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:36:56.855130 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:36:56.838544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:36:56.842522 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:36:56.860909 kernel: scsi host0: ahci
Sep 10 00:36:56.861183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
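The GPT complaints above are the usual signature of a disk image that was grown after it was built: the backup GPT header still sits where the image's last sector used to be. The two LBAs in the log pin down both sizes (512-byte sectors, per the virtio_blk record):

    SECTOR = 512            # "512-byte logical blocks" in the virtio_blk record
    backup_lba = 9289727    # where the alternate header actually is
    last_lba = 19775487     # where it should be (19775488 sectors - 1)
    print("size at image build: %.2f GiB" % ((backup_lba + 1) * SECTOR / 2**30))  # ~4.43
    print("disk size now:       %.2f GiB" % ((last_lba + 1) * SECTOR / 2**30))    # ~9.43
    # "GPT: Use GNU Parted to correct GPT errors." -- parted (or sgdisk -e)
    # relocates the backup structures to the real end of the disk.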
Sep 10 00:36:56.872526 kernel: scsi host1: ahci
Sep 10 00:36:56.874265 kernel: scsi host2: ahci
Sep 10 00:36:56.874306 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:36:56.876971 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462)
Sep 10 00:36:56.880144 kernel: scsi host3: ahci
Sep 10 00:36:56.887229 kernel: scsi host4: ahci
Sep 10 00:36:56.899288 kernel: BTRFS: device fsid 47ffa5df-7ab2-4f1a-b68f-595717991426 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (457)
Sep 10 00:36:56.900478 kernel: scsi host5: ahci
Sep 10 00:36:56.899265 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:36:56.923563 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 10 00:36:56.923591 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 10 00:36:56.923616 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 10 00:36:56.923632 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 10 00:36:56.923645 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 10 00:36:56.923656 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 10 00:36:56.948498 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:36:56.986550 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:36:56.995513 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:36:56.995588 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:36:57.011259 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:36:57.012400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:36:57.012477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:36:57.014735 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:36:57.016505 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:36:57.028917 disk-uuid[556]: Primary Header is updated.
Sep 10 00:36:57.028917 disk-uuid[556]: Secondary Entries is updated.
Sep 10 00:36:57.028917 disk-uuid[556]: Secondary Header is updated.
Sep 10 00:36:57.032496 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:36:57.036484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:36:57.038933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:36:57.045302 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:36:57.074421 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:36:57.225181 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:36:57.225324 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:36:57.226150 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:36:57.227616 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:36:57.227711 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:36:57.229308 kernel: ata3.00: applying bridge limits
Sep 10 00:36:57.229331 kernel: ata3.00: configured for UDMA/100
Sep 10 00:36:57.230144 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:36:57.234153 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:36:57.234192 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:36:57.281155 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:36:57.281438 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:36:57.295146 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:36:58.038895 disk-uuid[558]: The operation has completed successfully.
Sep 10 00:36:58.040255 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:36:58.073664 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:36:58.073825 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:36:58.106392 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:36:58.113638 sh[595]: Success
Sep 10 00:36:58.128165 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:36:58.165602 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:36:58.178955 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:36:58.182040 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:36:58.198240 kernel: BTRFS info (device dm-0): first mount of filesystem 47ffa5df-7ab2-4f1a-b68f-595717991426
Sep 10 00:36:58.198289 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:36:58.198305 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:36:58.200604 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:36:58.200628 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:36:58.205750 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:36:58.206555 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:36:58.216238 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:36:58.218055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:36:58.231210 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:36:58.231252 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:36:58.231267 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:36:58.234151 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:36:58.245506 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:36:58.247401 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:36:58.287944 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:36:58.295259 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:36:58.349667 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:36:58.361299 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:36:58.361753 ignition[727]: Ignition 2.19.0
Sep 10 00:36:58.361763 ignition[727]: Stage: fetch-offline
Sep 10 00:36:58.361827 ignition[727]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:36:58.361844 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:36:58.362011 ignition[727]: parsed url from cmdline: ""
Sep 10 00:36:58.362017 ignition[727]: no config URL provided
Sep 10 00:36:58.362025 ignition[727]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:36:58.362038 ignition[727]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:36:58.362074 ignition[727]: op(1): [started] loading QEMU firmware config module
Sep 10 00:36:58.362081 ignition[727]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:36:58.371084 ignition[727]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:36:58.389162 systemd-networkd[781]: lo: Link UP
Sep 10 00:36:58.389176 systemd-networkd[781]: lo: Gained carrier
Sep 10 00:36:58.391543 systemd-networkd[781]: Enumeration completed
Sep 10 00:36:58.392160 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:36:58.392165 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:36:58.394030 systemd-networkd[781]: eth0: Link UP
Sep 10 00:36:58.394036 systemd-networkd[781]: eth0: Gained carrier
Sep 10 00:36:58.394045 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:36:58.394242 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:36:58.397779 systemd[1]: Reached target network.target - Network.
Sep 10 00:36:58.422190 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:36:58.432750 ignition[727]: parsing config with SHA512: 82f781b056434e3f3a071b761eca07f1fc5ae1e6774773b87744bfaba2bce02e97468ce4f14ba0562b0b787df78c7807649d7322aea55c06efb917a42a4517f3
Sep 10 00:36:58.436362 unknown[727]: fetched base config from "system"
Sep 10 00:36:58.436373 unknown[727]: fetched user config from "qemu"
Sep 10 00:36:58.436990 ignition[727]: fetch-offline: fetch-offline passed
Sep 10 00:36:58.437106 ignition[727]: Ignition finished successfully
Sep 10 00:36:58.439346 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:36:58.441815 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:36:58.456691 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
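Ignition logs the SHA512 of every config it parses (the 128-hex-digit value above), which is a convenient way to confirm which config a boot actually consumed. A sketch using Python's hashlib; the file name is hypothetical, since on QEMU the user config is delivered over fw_cfg (the qemu_fw_cfg module loaded by op(1) above) rather than read from a local path:

    import hashlib

    # Hypothetical local copy of the Ignition config that was fed to the VM.
    with open("config.ign", "rb") as f:
        digest = hashlib.sha512(f.read()).hexdigest()
    print("parsing config with SHA512:", digest)
    # Compare against the digest in the ignition journal record above.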
Sep 10 00:36:58.470760 ignition[787]: Ignition 2.19.0
Sep 10 00:36:58.470779 ignition[787]: Stage: kargs
Sep 10 00:36:58.471025 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:36:58.471043 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:36:58.472386 ignition[787]: kargs: kargs passed
Sep 10 00:36:58.472450 ignition[787]: Ignition finished successfully
Sep 10 00:36:58.476219 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:36:58.489444 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:36:58.502805 ignition[795]: Ignition 2.19.0
Sep 10 00:36:58.502820 ignition[795]: Stage: disks
Sep 10 00:36:58.503051 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:36:58.503068 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:36:58.506662 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:36:58.504200 ignition[795]: disks: disks passed
Sep 10 00:36:58.508471 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:36:58.504264 ignition[795]: Ignition finished successfully
Sep 10 00:36:58.510716 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:36:58.512152 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:36:58.514051 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:36:58.515267 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:36:58.532561 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:36:58.547854 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:36:58.557514 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:36:58.566398 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:36:58.686159 kernel: EXT4-fs (vda9): mounted filesystem 0a9bf3c7-f8cd-4d40-b949-283957ba2f96 r/w with ordered data mode. Quota mode: none.
Sep 10 00:36:58.687428 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:36:58.688253 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:36:58.701223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:36:58.703288 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:36:58.704544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:36:58.704603 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:36:58.712582 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Sep 10 00:36:58.712606 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:36:58.704637 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 00:36:58.717547 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:36:58.717570 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:36:58.717584 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:36:58.713192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 00:36:58.731493 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
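The systemd-fsck record above encodes usage as used/total pairs; converting them to percentages shows how empty the freshly created ROOT filesystem is:

    # From "ROOT: clean, 14/553520 files, 52654/553472 blocks" above.
    files_used, files_total = 14, 553520
    blocks_used, blocks_total = 52654, 553472
    print(f"inodes: {100 * files_used / files_total:.3f}% used")    # ~0.003%
    print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # ~9.5%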
Sep 10 00:36:58.735409 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 00:36:58.767746 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 00:36:58.772576 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Sep 10 00:36:58.777815 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 00:36:58.782496 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 00:36:58.884541 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 00:36:58.902275 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 00:36:58.904444 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 00:36:58.913177 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038 Sep 10 00:36:58.932821 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 00:36:58.934891 ignition[929]: INFO : Ignition 2.19.0 Sep 10 00:36:58.934891 ignition[929]: INFO : Stage: mount Sep 10 00:36:58.936665 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:58.936665 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:58.936665 ignition[929]: INFO : mount: mount passed Sep 10 00:36:58.936665 ignition[929]: INFO : Ignition finished successfully Sep 10 00:36:58.939398 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 00:36:58.951318 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 00:36:59.197873 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 00:36:59.215337 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 00:36:59.224873 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Sep 10 00:36:59.224930 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038 Sep 10 00:36:59.225924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:36:59.225943 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:36:59.229134 kernel: BTRFS info (device vda6): auto enabling async discard Sep 10 00:36:59.231112 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
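The four "cut: ... No such file or directory" lines show initrd-setup-root probing the account databases under /sysroot/etc before they exist on a first boot. The exact cut flags are not in the log; assuming the usual colon-delimited first field, the equivalent logic in Python looks like this:

```python
def first_fields(path: str) -> list[str]:
    """Roughly 'cut -d: -f1 <path>' (flags assumed): the name column of a
    colon-separated account database. On a first boot the files are absent,
    hence the 'No such file or directory' diagnostics above."""
    try:
        with open(path) as fh:
            return [line.split(":", 1)[0] for line in fh if line.strip()]
    except FileNotFoundError:
        return []

for db in ("passwd", "group", "shadow", "gshadow"):
    print(db, first_fields(f"/sysroot/etc/{db}"))
```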
Sep 10 00:36:59.258240 ignition[960]: INFO : Ignition 2.19.0 Sep 10 00:36:59.258240 ignition[960]: INFO : Stage: files Sep 10 00:36:59.260232 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:36:59.260232 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:36:59.260232 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Sep 10 00:36:59.264345 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 00:36:59.264345 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 00:36:59.264345 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 00:36:59.264345 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 00:36:59.270209 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 00:36:59.270209 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 10 00:36:59.270209 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 10 00:36:59.264607 unknown[960]: wrote ssh authorized keys file for user: core Sep 10 00:36:59.302392 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 00:36:59.551232 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 10 00:36:59.551232 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 00:36:59.555551 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 10 00:36:59.618298 systemd-networkd[781]: eth0: Gained IPv6LL Sep 10 00:37:00.045747 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 00:37:01.036548 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 10 00:37:01.036548 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 10 00:37:01.041067 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 00:37:01.073133 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:37:01.081855 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:37:01.083571 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 00:37:01.083571 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 10 00:37:01.083571 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 00:37:01.083571 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:37:01.083571 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:37:01.083571 ignition[960]: INFO : files: files passed Sep 10 00:37:01.083571 ignition[960]: INFO : Ignition finished successfully Sep 10 00:37:01.086042 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 00:37:01.098501 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 00:37:01.101963 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:37:01.104189 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 00:37:01.104355 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 00:37:01.118633 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 00:37:01.122310 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:37:01.122310 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:37:01.126919 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:37:01.125617 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:37:01.127765 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 00:37:01.146400 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 00:37:01.174338 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 00:37:01.174503 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 00:37:01.177364 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 00:37:01.178668 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 00:37:01.180680 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 00:37:01.192424 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 00:37:01.209398 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 00:37:01.221336 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 00:37:01.232689 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:37:01.234999 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:37:01.237420 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 00:37:01.239296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 00:37:01.240318 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 00:37:01.242971 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 00:37:01.245099 systemd[1]: Stopped target basic.target - Basic System. Sep 10 00:37:01.246890 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 00:37:01.249223 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 00:37:01.251590 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 00:37:01.253830 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 00:37:01.256023 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 00:37:01.258544 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 00:37:01.260692 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 00:37:01.262816 systemd[1]: Stopped target swap.target - Swaps. Sep 10 00:37:01.264521 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 00:37:01.265566 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 00:37:01.267925 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 10 00:37:01.270174 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:37:01.272526 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 00:37:01.273481 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:37:01.276039 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 00:37:01.277060 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 00:37:01.279309 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:37:01.280378 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 00:37:01.282882 systemd[1]: Stopped target paths.target - Path Units. Sep 10 00:37:01.284805 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:37:01.288243 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:37:01.290955 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 00:37:01.292752 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 00:37:01.294628 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:37:01.295504 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 00:37:01.297440 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 00:37:01.298363 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 00:37:01.300511 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:37:01.301701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:37:01.304228 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:37:01.305220 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 00:37:01.320294 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 00:37:01.323155 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 00:37:01.324962 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:37:01.326060 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:37:01.328446 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:37:01.329504 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 00:37:01.336131 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:37:01.336297 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 00:37:01.357504 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:37:01.395626 ignition[1014]: INFO : Ignition 2.19.0 Sep 10 00:37:01.395626 ignition[1014]: INFO : Stage: umount Sep 10 00:37:01.420882 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:37:01.420882 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:37:01.423649 ignition[1014]: INFO : umount: umount passed Sep 10 00:37:01.424466 ignition[1014]: INFO : Ignition finished successfully Sep 10 00:37:01.427278 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:37:01.427486 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 00:37:01.428671 systemd[1]: Stopped target network.target - Network. Sep 10 00:37:01.431981 systemd[1]: ignition-disks.service: Deactivated successfully. 
Sep 10 00:37:01.432132 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 10 00:37:01.433259 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:37:01.433321 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 00:37:01.436610 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:37:01.436674 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 00:37:01.439411 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 00:37:01.440312 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 00:37:01.444167 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 00:37:01.446503 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 00:37:01.455253 systemd-networkd[781]: eth0: DHCPv6 lease lost Sep 10 00:37:01.455721 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 00:37:01.455922 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 00:37:01.459105 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 00:37:01.459221 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:37:01.460099 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:37:01.460326 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 00:37:01.463607 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:37:01.463684 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:37:01.475243 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 00:37:01.475334 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:37:01.475403 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 00:37:01.477200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:37:01.477261 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:37:01.477559 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:37:01.477605 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 00:37:01.477984 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:37:01.488962 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:37:01.489110 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 00:37:01.494052 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:37:01.494267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:37:01.497398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:37:01.497451 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 00:37:01.499361 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:37:01.499405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:37:01.501510 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:37:01.501564 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 00:37:01.503665 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:37:01.503715 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 10 00:37:01.505624 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:37:01.505675 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:37:01.515332 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 00:37:01.516461 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:37:01.516532 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:37:01.518758 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 00:37:01.518812 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:37:01.521001 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:37:01.521057 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:37:01.523473 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:37:01.523533 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:37:01.526020 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:37:01.526154 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 00:37:02.002539 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:37:02.002744 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 00:37:02.004556 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 00:37:02.005521 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:37:02.005633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 00:37:02.019499 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 00:37:02.028360 systemd[1]: Switching root. Sep 10 00:37:02.074382 systemd-journald[193]: Journal stopped Sep 10 00:37:03.643049 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Sep 10 00:37:03.643171 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:37:03.643186 kernel: SELinux: policy capability open_perms=1 Sep 10 00:37:03.643198 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:37:03.643209 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:37:03.643232 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:37:03.643250 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:37:03.643262 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:37:03.643274 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:37:03.643286 kernel: audit: type=1403 audit(1757464622.773:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 00:37:03.643302 systemd[1]: Successfully loaded SELinux policy in 43.103ms. Sep 10 00:37:03.643336 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.740ms. Sep 10 00:37:03.643349 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 10 00:37:03.643362 systemd[1]: Detected virtualization kvm. Sep 10 00:37:03.643377 systemd[1]: Detected architecture x86-64. 
Sep 10 00:37:03.643389 systemd[1]: Detected first boot. Sep 10 00:37:03.643401 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:37:03.643413 zram_generator::config[1058]: No configuration found. Sep 10 00:37:03.643433 systemd[1]: Populated /etc with preset unit settings. Sep 10 00:37:03.643446 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 00:37:03.643459 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 00:37:03.643472 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 00:37:03.643493 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 00:37:03.643505 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 00:37:03.643517 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 00:37:03.643529 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 00:37:03.643542 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 00:37:03.643554 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 00:37:03.643567 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 00:37:03.643579 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 00:37:03.643598 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:37:03.643611 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:37:03.643623 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 00:37:03.643636 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 00:37:03.643649 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 00:37:03.643661 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 00:37:03.643673 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 00:37:03.643685 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:37:03.643697 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 00:37:03.643717 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 00:37:03.643729 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 00:37:03.643742 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 00:37:03.643755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:37:03.643767 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 00:37:03.643782 systemd[1]: Reached target slices.target - Slice Units. Sep 10 00:37:03.643794 systemd[1]: Reached target swap.target - Swaps. Sep 10 00:37:03.643806 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 00:37:03.643821 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 00:37:03.643833 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:37:03.643854 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Sep 10 00:37:03.643867 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:37:03.643879 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 00:37:03.643891 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 00:37:03.643903 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 00:37:03.643915 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 00:37:03.643928 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:03.643946 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 00:37:03.643959 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 00:37:03.643971 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 00:37:03.643984 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:37:03.643996 systemd[1]: Reached target machines.target - Containers. Sep 10 00:37:03.644009 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 00:37:03.644021 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:37:03.644033 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 00:37:03.644050 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 00:37:03.644063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:37:03.644075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:37:03.644088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:37:03.644100 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 00:37:03.644129 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:37:03.644142 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:37:03.644155 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 00:37:03.644167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 00:37:03.644187 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 00:37:03.644200 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 00:37:03.644212 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 00:37:03.644224 kernel: fuse: init (API version 7.39) Sep 10 00:37:03.644236 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 00:37:03.644248 kernel: loop: module loaded Sep 10 00:37:03.644260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 00:37:03.644272 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 00:37:03.644285 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 00:37:03.644300 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 00:37:03.644312 systemd[1]: Stopped verity-setup.service. 
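Many units in this section are "skipped because of an unmet condition check", e.g. ConditionPathExists=!/run/ignition.json or ConditionVirtualization=xen. The documented systemd semantics for the path condition, including the leading "!" negation, fit in a few lines:

```python
import os

def condition_path_exists(arg: str) -> bool:
    """ConditionPathExists= as systemd documents it: a leading '!' negates
    the test, so '!/run/ignition.json' passes only when the file is absent."""
    negate = arg.startswith("!")
    return os.path.exists(arg.lstrip("!")) ^ negate

# ignition-fetch.service above was skipped because /run/ignition.json
# existed, i.e. this condition evaluated to False:
print(condition_path_exists("!/run/ignition.json"))
```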
Sep 10 00:37:03.644324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:03.644336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 00:37:03.644348 kernel: ACPI: bus type drm_connector registered Sep 10 00:37:03.644360 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 00:37:03.644372 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 00:37:03.644384 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 00:37:03.644402 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 00:37:03.644414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 00:37:03.644427 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:37:03.644439 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:37:03.644452 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 00:37:03.644467 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:37:03.644479 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:37:03.644492 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:37:03.644504 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:37:03.644516 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:37:03.644552 systemd-journald[1128]: Collecting audit messages is disabled. Sep 10 00:37:03.644580 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:37:03.644593 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:37:03.644605 systemd-journald[1128]: Journal started Sep 10 00:37:03.644627 systemd-journald[1128]: Runtime Journal (/run/log/journal/04f7e83b1e03483a858c83f4f66f3651) is 6.0M, max 48.3M, 42.2M free. Sep 10 00:37:03.324536 systemd[1]: Queued start job for default target multi-user.target. Sep 10 00:37:03.340386 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 00:37:03.340873 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 00:37:03.646617 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 00:37:03.649954 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 00:37:03.651490 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:37:03.651724 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:37:03.653446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 00:37:03.655198 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 00:37:03.657032 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 00:37:03.679237 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 00:37:03.694477 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 00:37:03.698254 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 00:37:03.699661 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:37:03.699718 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 10 00:37:03.702664 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 10 00:37:03.706206 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 00:37:03.709190 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 00:37:03.710620 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:37:03.713643 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 00:37:03.718280 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 00:37:03.719987 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:37:03.730550 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 00:37:03.731957 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:37:03.736654 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:37:03.754492 systemd-journald[1128]: Time spent on flushing to /var/log/journal/04f7e83b1e03483a858c83f4f66f3651 is 58.024ms for 993 entries. Sep 10 00:37:03.754492 systemd-journald[1128]: System Journal (/var/log/journal/04f7e83b1e03483a858c83f4f66f3651) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:37:04.398051 systemd-journald[1128]: Received client request to flush runtime journal. Sep 10 00:37:04.398095 kernel: loop0: detected capacity change from 0 to 224512 Sep 10 00:37:04.398145 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:37:04.398160 kernel: loop1: detected capacity change from 0 to 142488 Sep 10 00:37:04.398173 kernel: loop2: detected capacity change from 0 to 140768 Sep 10 00:37:04.398186 kernel: loop3: detected capacity change from 0 to 224512 Sep 10 00:37:04.398200 kernel: loop4: detected capacity change from 0 to 142488 Sep 10 00:37:04.398216 kernel: loop5: detected capacity change from 0 to 140768 Sep 10 00:37:03.792196 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 00:37:03.795910 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 00:37:03.798963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:37:03.800494 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 00:37:03.801802 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 00:37:03.803353 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 00:37:03.810334 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 10 00:37:03.829242 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 00:37:03.841916 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:37:03.873071 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Sep 10 00:37:03.873085 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Sep 10 00:37:03.879158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Sep 10 00:37:03.964972 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 00:37:03.973466 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 00:37:04.101046 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 00:37:04.112568 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 00:37:04.137280 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Sep 10 00:37:04.137296 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Sep 10 00:37:04.142719 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:37:04.270168 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 00:37:04.270785 (sd-merge)[1190]: Merged extensions into '/usr'. Sep 10 00:37:04.274720 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 00:37:04.274731 systemd[1]: Reloading... Sep 10 00:37:04.431299 zram_generator::config[1213]: No configuration found. Sep 10 00:37:04.532392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:37:04.542591 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:37:04.582251 systemd[1]: Reloading finished in 306 ms. Sep 10 00:37:04.672447 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 00:37:04.674510 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 00:37:04.676455 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 00:37:04.678507 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 00:37:04.693263 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 00:37:04.731605 systemd[1]: Starting ensure-sysext.service... Sep 10 00:37:04.734057 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 10 00:37:04.739267 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 00:37:04.744413 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Sep 10 00:37:04.744440 systemd[1]: Reloading... Sep 10 00:37:04.849153 zram_generator::config[1284]: No configuration found. Sep 10 00:37:04.889609 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:37:04.890688 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 00:37:04.891942 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:37:04.892404 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Sep 10 00:37:04.892507 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Sep 10 00:37:04.896671 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:37:04.896686 systemd-tmpfiles[1262]: Skipping /boot Sep 10 00:37:04.913285 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 10 00:37:04.913314 systemd-tmpfiles[1262]: Skipping /boot Sep 10 00:37:05.008330 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:37:05.063336 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:37:05.063552 systemd[1]: Reloading finished in 318 ms. Sep 10 00:37:05.100704 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 10 00:37:05.102418 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:37:05.111971 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:37:05.140020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 00:37:05.142452 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 00:37:05.146382 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 00:37:05.148605 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 00:37:05.151456 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:05.151626 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:37:05.154663 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:37:05.157173 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:37:05.159631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:37:05.160741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:37:05.160880 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:05.162065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:37:05.162323 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:37:05.166758 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:05.166948 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:37:05.168332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:37:05.169501 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:37:05.171548 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 00:37:05.172532 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:05.173426 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:37:05.173612 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:37:05.175371 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:37:05.175551 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
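The (sd-merge) lines above record systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr. A sketch that enumerates such images, modeled on the kubernetes.raw symlink Ignition wrote earlier; /etc/extensions is the one search path this log demonstrably uses, and treating *.raw basenames as extension names is an assumption:

```python
import glob
import os

def list_extensions(root: str = "/etc/extensions"):
    """Yield (name, backing image) pairs for *.raw extension images,
    resolving symlinks such as kubernetes.raw -> /opt/extensions/..."""
    for image in sorted(glob.glob(os.path.join(root, "*.raw"))):
        name = os.path.basename(image).removesuffix(".raw")
        yield name, os.path.realpath(image)

for name, target in list_extensions():
    print(f"{name}: {target}")
```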
Sep 10 00:37:05.177093 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:37:05.177317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:37:05.187399 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:05.187612 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:37:05.189045 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:37:05.191141 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:37:05.194370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:37:05.197284 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:37:05.199061 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:37:05.199237 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:37:05.200259 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:37:05.200456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:37:05.202737 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:37:05.203171 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:37:05.204928 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:37:05.205382 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:37:05.208091 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:37:05.208407 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:37:05.212338 systemd[1]: Finished ensure-sysext.service. Sep 10 00:37:05.217460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:37:05.217537 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:37:05.226345 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 00:37:05.231866 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 00:37:05.247180 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 00:37:05.260059 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 00:37:05.271481 augenrules[1375]: No rules Sep 10 00:37:05.272321 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:37:05.275917 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 00:37:05.277449 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:37:05.320678 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 00:37:05.322158 systemd[1]: Reached target time-set.target - System Time Set. 
Sep 10 00:37:05.326517 systemd-resolved[1332]: Positive Trust Anchors: Sep 10 00:37:05.326540 systemd-resolved[1332]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 00:37:05.326572 systemd-resolved[1332]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 00:37:05.330512 systemd-resolved[1332]: Defaulting to hostname 'linux'. Sep 10 00:37:05.332470 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 00:37:05.333641 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:37:05.473870 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 00:37:05.489646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:37:05.492501 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 00:37:05.507703 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 00:37:05.513925 systemd-udevd[1384]: Using default interface naming scheme 'v255'. Sep 10 00:37:05.532250 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:37:05.546416 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 00:37:05.618328 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 10 00:37:05.635631 systemd-networkd[1393]: lo: Link UP Sep 10 00:37:05.635645 systemd-networkd[1393]: lo: Gained carrier Sep 10 00:37:05.636448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1405) Sep 10 00:37:05.637540 systemd-networkd[1393]: Enumeration completed Sep 10 00:37:05.637642 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 00:37:05.639000 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:37:05.639010 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:37:05.639226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:37:05.639577 systemd[1]: Reached target network.target - Network. Sep 10 00:37:05.640738 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:37:05.640779 systemd-networkd[1393]: eth0: Link UP Sep 10 00:37:05.640791 systemd-networkd[1393]: eth0: Gained carrier Sep 10 00:37:05.640802 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:37:05.647490 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Sep 10 00:37:05.649238 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:37:05.651548 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:37:05.654230 systemd-timesyncd[1352]: Network configuration changed, trying to establish connection. Sep 10 00:37:05.656190 systemd-timesyncd[1352]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:37:05.656252 systemd-timesyncd[1352]: Initial clock synchronization to Wed 2025-09-10 00:37:05.777886 UTC. Sep 10 00:37:05.666539 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 10 00:37:05.677653 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:37:05.690307 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:37:05.690351 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:37:05.694564 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:37:05.740523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:37:05.763935 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:37:05.772452 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:37:05.772685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:37:05.779298 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:37:05.785210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 00:37:05.787985 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 00:37:05.809676 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 00:37:05.853221 kernel: kvm_amd: TSC scaling supported Sep 10 00:37:05.853351 kernel: kvm_amd: Nested Virtualization enabled Sep 10 00:37:05.853387 kernel: kvm_amd: Nested Paging enabled Sep 10 00:37:05.854351 kernel: kvm_amd: LBR virtualization supported Sep 10 00:37:05.854421 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 10 00:37:05.855529 kernel: kvm_amd: Virtual GIF supported Sep 10 00:37:05.881157 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:37:05.898344 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:37:05.917528 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 10 00:37:05.929319 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 10 00:37:05.941274 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:37:05.980023 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 10 00:37:05.981930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:37:05.983227 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 00:37:05.984596 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 00:37:05.986128 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 00:37:05.987908 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 00:37:05.989352 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
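eth0 receives the same DHCPv4 lease in the real root as it did in the initrd: 10.0.0.67/16 with gateway 10.0.0.1 from server 10.0.0.1. Python's ipaddress module can confirm what that /16 implies, with no assumptions beyond the logged lease:

```python
import ipaddress

# Lease as logged: "DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1"
iface = ipaddress.ip_interface("10.0.0.67/16")
print(iface.network)                                      # 10.0.0.0/16
print(iface.network.broadcast_address)                    # 10.0.255.255
print(ipaddress.ip_address("10.0.0.1") in iface.network)  # True: on-link gateway
```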
Sep 10 00:37:05.990845 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 00:37:05.992268 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:37:05.992295 systemd[1]: Reached target paths.target - Path Units. Sep 10 00:37:05.993333 systemd[1]: Reached target timers.target - Timer Units. Sep 10 00:37:05.995394 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 00:37:05.998321 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 00:37:06.012395 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 00:37:06.015305 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 10 00:37:06.017153 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 00:37:06.018452 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 00:37:06.019473 systemd[1]: Reached target basic.target - Basic System. Sep 10 00:37:06.020553 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:37:06.020581 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:37:06.021699 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 00:37:06.024203 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 00:37:06.028212 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:37:06.029236 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 00:37:06.032435 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 00:37:06.033837 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 00:37:06.035950 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 00:37:06.044327 jq[1440]: false Sep 10 00:37:06.044247 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 00:37:06.049334 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 00:37:06.052426 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:37:06.058592 extend-filesystems[1441]: Found loop3 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found loop4 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found loop5 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found sr0 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda1 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda2 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda3 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found usr Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda4 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda6 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda7 Sep 10 00:37:06.065257 extend-filesystems[1441]: Found vda9 Sep 10 00:37:06.065257 extend-filesystems[1441]: Checking size of /dev/vda9 Sep 10 00:37:06.088787 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:37:06.064431 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 00:37:06.087562 dbus-daemon[1439]: [system] SELinux support is enabled Sep 10 00:37:06.089262 extend-filesystems[1441]: Resized partition /dev/vda9 Sep 10 00:37:06.068948 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:37:06.090818 extend-filesystems[1458]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:37:06.098330 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1397) Sep 10 00:37:06.071902 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:37:06.080840 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 00:37:06.094823 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:37:06.102658 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:37:06.108274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 00:37:06.120878 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:37:06.121262 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:37:06.121716 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:37:06.122052 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 00:37:06.126822 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:37:06.127117 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:37:06.129200 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:37:06.165289 update_engine[1457]: I20250910 00:37:06.155030 1457 main.cc:92] Flatcar Update Engine starting Sep 10 00:37:06.165289 update_engine[1457]: I20250910 00:37:06.160681 1457 update_check_scheduler.cc:74] Next update check in 9m16s Sep 10 00:37:06.153522 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:37:06.169753 jq[1461]: true Sep 10 00:37:06.170186 extend-filesystems[1458]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:37:06.170186 extend-filesystems[1458]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:37:06.170186 extend-filesystems[1458]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:37:06.169747 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:37:06.179656 jq[1472]: true Sep 10 00:37:06.179836 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Sep 10 00:37:06.170001 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:37:06.188250 tar[1465]: linux-amd64/LICENSE Sep 10 00:37:06.190429 tar[1465]: linux-amd64/helm Sep 10 00:37:06.191885 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:37:06.194192 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:37:06.194244 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 10 00:37:06.196280 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 10 00:37:06.196318 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 10 00:37:06.196890 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 10 00:37:06.197108 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 10 00:37:06.201512 systemd-logind[1448]: New seat seat0.
Sep 10 00:37:06.210337 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 10 00:37:06.211944 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 10 00:37:06.269858 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Sep 10 00:37:06.270775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 10 00:37:06.273105 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 10 00:37:06.286709 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 10 00:37:06.303934 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 10 00:37:06.362332 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 10 00:37:06.412135 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 10 00:37:06.419089 systemd[1]: issuegen.service: Deactivated successfully.
Sep 10 00:37:06.419376 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 10 00:37:06.423815 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 10 00:37:06.504815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 10 00:37:06.549663 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 10 00:37:06.554208 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 10 00:37:06.555683 systemd[1]: Reached target getty.target - Login Prompts.
Sep 10 00:37:06.742313 containerd[1471]: time="2025-09-10T00:37:06.742152896Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 10 00:37:06.786710 containerd[1471]: time="2025-09-10T00:37:06.786548397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.789660 containerd[1471]: time="2025-09-10T00:37:06.789558131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:37:06.789660 containerd[1471]: time="2025-09-10T00:37:06.789612533Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 10 00:37:06.789660 containerd[1471]: time="2025-09-10T00:37:06.789639316Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 10 00:37:06.790026 containerd[1471]: time="2025-09-10T00:37:06.789951304Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 10 00:37:06.790026 containerd[1471]: time="2025-09-10T00:37:06.789985749Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.790143 containerd[1471]: time="2025-09-10T00:37:06.790100349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:37:06.790348 containerd[1471]: time="2025-09-10T00:37:06.790238248Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791336 containerd[1471]: time="2025-09-10T00:37:06.790690721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791336 containerd[1471]: time="2025-09-10T00:37:06.790714282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791336 containerd[1471]: time="2025-09-10T00:37:06.790741732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791336 containerd[1471]: time="2025-09-10T00:37:06.790752866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791336 containerd[1471]: time="2025-09-10T00:37:06.790884587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791336 containerd[1471]: time="2025-09-10T00:37:06.791259267Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791535 containerd[1471]: time="2025-09-10T00:37:06.791436577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 10 00:37:06.791535 containerd[1471]: time="2025-09-10T00:37:06.791458756Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 10 00:37:06.791701 containerd[1471]: time="2025-09-10T00:37:06.791674095Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 10 00:37:06.791804 containerd[1471]: time="2025-09-10T00:37:06.791780820Z" level=info msg="metadata content store policy set" policy=shared
Sep 10 00:37:06.899874 tar[1465]: linux-amd64/README.md
Sep 10 00:37:06.922983 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 10 00:37:07.007393 containerd[1471]: time="2025-09-10T00:37:07.007246698Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 10 00:37:07.007393 containerd[1471]: time="2025-09-10T00:37:07.007349720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 10 00:37:07.007393 containerd[1471]: time="2025-09-10T00:37:07.007385067Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 10 00:37:07.007393 containerd[1471]: time="2025-09-10T00:37:07.007405762Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 10 00:37:07.007602 containerd[1471]: time="2025-09-10T00:37:07.007426911Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 10 00:37:07.007802 containerd[1471]: time="2025-09-10T00:37:07.007765109Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 10 00:37:07.008207 containerd[1471]: time="2025-09-10T00:37:07.008170861Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 10 00:37:07.008409 containerd[1471]: time="2025-09-10T00:37:07.008385725Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 10 00:37:07.008452 containerd[1471]: time="2025-09-10T00:37:07.008412273Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 10 00:37:07.008452 containerd[1471]: time="2025-09-10T00:37:07.008433100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 10 00:37:07.008490 containerd[1471]: time="2025-09-10T00:37:07.008452029Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008490 containerd[1471]: time="2025-09-10T00:37:07.008470202Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008490 containerd[1471]: time="2025-09-10T00:37:07.008487203Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008559 containerd[1471]: time="2025-09-10T00:37:07.008507243Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008559 containerd[1471]: time="2025-09-10T00:37:07.008528726Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008559 containerd[1471]: time="2025-09-10T00:37:07.008546686Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008615 containerd[1471]: time="2025-09-10T00:37:07.008562639Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008615 containerd[1471]: time="2025-09-10T00:37:07.008578460Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 10 00:37:07.008615 containerd[1471]: time="2025-09-10T00:37:07.008606603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008678 containerd[1471]: time="2025-09-10T00:37:07.008626148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008678 containerd[1471]: time="2025-09-10T00:37:07.008656933Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008678 containerd[1471]: time="2025-09-10T00:37:07.008673602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008743 containerd[1471]: time="2025-09-10T00:37:07.008693138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008743 containerd[1471]: time="2025-09-10T00:37:07.008712581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008743 containerd[1471]: time="2025-09-10T00:37:07.008727646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008821 containerd[1471]: time="2025-09-10T00:37:07.008742792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008821 containerd[1471]: time="2025-09-10T00:37:07.008760369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008821 containerd[1471]: time="2025-09-10T00:37:07.008779298Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008821 containerd[1471]: time="2025-09-10T00:37:07.008793466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008821 containerd[1471]: time="2025-09-10T00:37:07.008811870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008916 containerd[1471]: time="2025-09-10T00:37:07.008828005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008916 containerd[1471]: time="2025-09-10T00:37:07.008854230Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 10 00:37:07.008916 containerd[1471]: time="2025-09-10T00:37:07.008883662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008916 containerd[1471]: time="2025-09-10T00:37:07.008898254Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.008916 containerd[1471]: time="2025-09-10T00:37:07.008911059Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 10 00:37:07.009017 containerd[1471]: time="2025-09-10T00:37:07.008976403Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 10 00:37:07.009017 containerd[1471]: time="2025-09-10T00:37:07.009003263Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 10 00:37:07.009017 containerd[1471]: time="2025-09-10T00:37:07.009016674Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 10 00:37:07.009107 containerd[1471]: time="2025-09-10T00:37:07.009087608Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 10 00:37:07.009107 containerd[1471]: time="2025-09-10T00:37:07.009102038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.009161 containerd[1471]: time="2025-09-10T00:37:07.009118485Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 10 00:37:07.009161 containerd[1471]: time="2025-09-10T00:37:07.009147374Z" level=info msg="NRI interface is disabled by configuration."
Sep 10 00:37:07.009161 containerd[1471]: time="2025-09-10T00:37:07.009159088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 10 00:37:07.010343 containerd[1471]: time="2025-09-10T00:37:07.010020349Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 10 00:37:07.010343 containerd[1471]: time="2025-09-10T00:37:07.010149707Z" level=info msg="Connect containerd service"
Sep 10 00:37:07.010343 containerd[1471]: time="2025-09-10T00:37:07.010263436Z" level=info msg="using legacy CRI server"
Sep 10 00:37:07.010343 containerd[1471]: time="2025-09-10T00:37:07.010315138Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 10 00:37:07.010792 containerd[1471]: time="2025-09-10T00:37:07.010481114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 10 00:37:07.011494 containerd[1471]: time="2025-09-10T00:37:07.011464094Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 00:37:07.011645 containerd[1471]: time="2025-09-10T00:37:07.011607962Z" level=info msg="Start subscribing containerd event"
Sep 10 00:37:07.011678 containerd[1471]: time="2025-09-10T00:37:07.011662732Z" level=info msg="Start recovering state"
Sep 10 00:37:07.011748 containerd[1471]: time="2025-09-10T00:37:07.011732296Z" level=info msg="Start event monitor"
Sep 10 00:37:07.011774 containerd[1471]: time="2025-09-10T00:37:07.011761042Z" level=info msg="Start snapshots syncer"
Sep 10 00:37:07.011809 containerd[1471]: time="2025-09-10T00:37:07.011787197Z" level=info msg="Start cni network conf syncer for default"
Sep 10 00:37:07.011851 containerd[1471]: time="2025-09-10T00:37:07.011807963Z" level=info msg="Start streaming server"
Sep 10 00:37:07.011954 containerd[1471]: time="2025-09-10T00:37:07.011898090Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 10 00:37:07.011985 containerd[1471]: time="2025-09-10T00:37:07.011970256Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 10 00:37:07.012048 containerd[1471]: time="2025-09-10T00:37:07.012032513Z" level=info msg="containerd successfully booted in 0.271959s"
Sep 10 00:37:07.012167 systemd[1]: Started containerd.service - containerd container runtime.
Sep 10 00:37:07.235275 systemd-networkd[1393]: eth0: Gained IPv6LL
Sep 10 00:37:07.239701 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 10 00:37:07.241845 systemd[1]: Reached target network-online.target - Network is Online.
Sep 10 00:37:07.255542 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 10 00:37:07.258983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:37:07.262235 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 10 00:37:07.287819 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 10 00:37:07.288167 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 10 00:37:07.290350 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 10 00:37:07.293537 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 10 00:37:08.419047 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:37:08.421064 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 10 00:37:08.422475 systemd[1]: Startup finished in 1.471s (kernel) + 6.990s (initrd) + 5.690s (userspace) = 14.151s.
Sep 10 00:37:08.424816 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 00:37:09.064759 kubelet[1552]: E0910 00:37:09.064670 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 00:37:09.069030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 00:37:09.069262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 00:37:09.069622 systemd[1]: kubelet.service: Consumed 1.633s CPU time.
Sep 10 00:37:09.166829 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:37:09.168706 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:51608.service - OpenSSH per-connection server daemon (10.0.0.1:51608). Sep 10 00:37:09.213528 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 51608 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:09.215559 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:09.225698 systemd-logind[1448]: New session 1 of user core. Sep 10 00:37:09.227326 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:37:09.240416 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:37:09.253514 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:37:09.256748 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:37:09.266506 (systemd)[1569]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:37:09.398760 systemd[1569]: Queued start job for default target default.target. Sep 10 00:37:09.410587 systemd[1569]: Created slice app.slice - User Application Slice. Sep 10 00:37:09.410615 systemd[1569]: Reached target paths.target - Paths. Sep 10 00:37:09.410630 systemd[1569]: Reached target timers.target - Timers. Sep 10 00:37:09.412406 systemd[1569]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:37:09.425301 systemd[1569]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:37:09.425441 systemd[1569]: Reached target sockets.target - Sockets. Sep 10 00:37:09.425461 systemd[1569]: Reached target basic.target - Basic System. Sep 10 00:37:09.425501 systemd[1569]: Reached target default.target - Main User Target. Sep 10 00:37:09.425536 systemd[1569]: Startup finished in 149ms. Sep 10 00:37:09.425959 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:37:09.428027 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:37:09.490760 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:51616.service - OpenSSH per-connection server daemon (10.0.0.1:51616). Sep 10 00:37:09.532688 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 51616 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:09.534721 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:09.538991 systemd-logind[1448]: New session 2 of user core. Sep 10 00:37:09.547521 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:37:09.602609 sshd[1580]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:09.611921 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:51616.service: Deactivated successfully. Sep 10 00:37:09.613747 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:37:09.615438 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:37:09.632416 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:51620.service - OpenSSH per-connection server daemon (10.0.0.1:51620). Sep 10 00:37:09.633331 systemd-logind[1448]: Removed session 2. 
Sep 10 00:37:09.662208 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 51620 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:09.663771 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:09.668256 systemd-logind[1448]: New session 3 of user core. Sep 10 00:37:09.678258 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:37:09.728924 sshd[1587]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:09.744532 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:51620.service: Deactivated successfully. Sep 10 00:37:09.746932 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:37:09.748557 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:37:09.761411 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:51632.service - OpenSSH per-connection server daemon (10.0.0.1:51632). Sep 10 00:37:09.762387 systemd-logind[1448]: Removed session 3. Sep 10 00:37:09.792667 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 51632 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:09.794549 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:09.799374 systemd-logind[1448]: New session 4 of user core. Sep 10 00:37:09.809269 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 00:37:09.864144 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:09.876009 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:51632.service: Deactivated successfully. Sep 10 00:37:09.877891 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:37:09.879270 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:37:09.890409 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:56364.service - OpenSSH per-connection server daemon (10.0.0.1:56364). Sep 10 00:37:09.891452 systemd-logind[1448]: Removed session 4. Sep 10 00:37:09.920747 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 56364 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:09.922710 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:09.927246 systemd-logind[1448]: New session 5 of user core. Sep 10 00:37:09.935273 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:37:09.996279 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:37:09.996722 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:37:10.018056 sudo[1604]: pam_unix(sudo:session): session closed for user root Sep 10 00:37:10.020298 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:10.031054 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:56364.service: Deactivated successfully. Sep 10 00:37:10.033606 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:37:10.035833 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:37:10.043444 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:56366.service - OpenSSH per-connection server daemon (10.0.0.1:56366). Sep 10 00:37:10.044759 systemd-logind[1448]: Removed session 5. 
Sep 10 00:37:10.075100 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 56366 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:10.077303 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:10.081756 systemd-logind[1448]: New session 6 of user core. Sep 10 00:37:10.091275 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:37:10.148365 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:37:10.148755 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:37:10.153671 sudo[1613]: pam_unix(sudo:session): session closed for user root Sep 10 00:37:10.162187 sudo[1612]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:37:10.162557 sudo[1612]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:37:10.265780 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:37:10.268388 auditctl[1616]: No rules Sep 10 00:37:10.269035 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:37:10.269426 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:37:10.274721 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:37:10.314090 augenrules[1634]: No rules Sep 10 00:37:10.315372 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:37:10.317022 sudo[1612]: pam_unix(sudo:session): session closed for user root Sep 10 00:37:10.319201 sshd[1609]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:10.334898 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:56366.service: Deactivated successfully. Sep 10 00:37:10.337655 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:37:10.340114 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:37:10.357539 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:56380.service - OpenSSH per-connection server daemon (10.0.0.1:56380). Sep 10 00:37:10.358619 systemd-logind[1448]: Removed session 6. Sep 10 00:37:10.389567 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:37:10.391620 sshd[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:37:10.399198 systemd-logind[1448]: New session 7 of user core. Sep 10 00:37:10.420604 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:37:10.489223 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:37:10.491186 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:37:11.206456 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 00:37:11.206557 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:37:11.989387 dockerd[1664]: time="2025-09-10T00:37:11.989288929Z" level=info msg="Starting up" Sep 10 00:37:12.763090 dockerd[1664]: time="2025-09-10T00:37:12.763007980Z" level=info msg="Loading containers: start." 
Sep 10 00:37:12.888149 kernel: Initializing XFRM netlink socket
Sep 10 00:37:12.975865 systemd-networkd[1393]: docker0: Link UP
Sep 10 00:37:13.171912 dockerd[1664]: time="2025-09-10T00:37:13.171769316Z" level=info msg="Loading containers: done."
Sep 10 00:37:13.278650 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2969729401-merged.mount: Deactivated successfully.
Sep 10 00:37:13.373276 dockerd[1664]: time="2025-09-10T00:37:13.373189721Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 00:37:13.373487 dockerd[1664]: time="2025-09-10T00:37:13.373369877Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 10 00:37:13.373669 dockerd[1664]: time="2025-09-10T00:37:13.373634558Z" level=info msg="Daemon has completed initialization"
Sep 10 00:37:13.458287 dockerd[1664]: time="2025-09-10T00:37:13.457904834Z" level=info msg="API listen on /run/docker.sock"
Sep 10 00:37:13.458460 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 00:37:14.652414 containerd[1471]: time="2025-09-10T00:37:14.652341177Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 10 00:37:15.367497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446744473.mount: Deactivated successfully.
Sep 10 00:37:16.573670 containerd[1471]: time="2025-09-10T00:37:16.573608037Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:37:16.574410 containerd[1471]: time="2025-09-10T00:37:16.574315051Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 10 00:37:16.575520 containerd[1471]: time="2025-09-10T00:37:16.575477850Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:37:16.578458 containerd[1471]: time="2025-09-10T00:37:16.578429750Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:37:16.579450 containerd[1471]: time="2025-09-10T00:37:16.579407225Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 1.927000987s"
Sep 10 00:37:16.579522 containerd[1471]: time="2025-09-10T00:37:16.579454169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 10 00:37:16.580141 containerd[1471]: time="2025-09-10T00:37:16.580107057Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 10 00:37:17.838537 containerd[1471]: time="2025-09-10T00:37:17.838471330Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:37:17.839640 containerd[1471]: time="2025-09-10T00:37:17.839518473Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 10 00:37:17.841090 containerd[1471]: time="2025-09-10T00:37:17.841055390Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:37:17.844765 containerd[1471]: time="2025-09-10T00:37:17.844686311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:37:17.847046 containerd[1471]: time="2025-09-10T00:37:17.846942092Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.266712233s"
Sep 10 00:37:17.849030 containerd[1471]: time="2025-09-10T00:37:17.847201706Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 10 00:37:17.849030 containerd[1471]: time="2025-09-10T00:37:17.848795601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 10 00:37:19.320159 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 00:37:19.334371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 00:37:19.551512 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 00:37:19.558701 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:37:19.981909 containerd[1471]: time="2025-09-10T00:37:19.981839606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:19.984398 containerd[1471]: time="2025-09-10T00:37:19.984019603Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Sep 10 00:37:19.985805 containerd[1471]: time="2025-09-10T00:37:19.985741411Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:19.988917 containerd[1471]: time="2025-09-10T00:37:19.988872678Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:19.989995 containerd[1471]: time="2025-09-10T00:37:19.989960590Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.141138132s" Sep 10 00:37:19.989995 containerd[1471]: time="2025-09-10T00:37:19.989996777Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 10 00:37:19.991202 containerd[1471]: time="2025-09-10T00:37:19.991156630Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 10 00:37:19.995411 kubelet[1887]: E0910 00:37:19.995299 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:37:20.003144 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:37:20.003374 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:37:21.447887 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4005051422.mount: Deactivated successfully. 
Sep 10 00:37:22.293763 containerd[1471]: time="2025-09-10T00:37:22.293658192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:22.294891 containerd[1471]: time="2025-09-10T00:37:22.294805350Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Sep 10 00:37:22.296270 containerd[1471]: time="2025-09-10T00:37:22.296207294Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:22.299058 containerd[1471]: time="2025-09-10T00:37:22.298966779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:22.299632 containerd[1471]: time="2025-09-10T00:37:22.299588414Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.308384426s" Sep 10 00:37:22.299632 containerd[1471]: time="2025-09-10T00:37:22.299628628Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 10 00:37:22.300462 containerd[1471]: time="2025-09-10T00:37:22.300424620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 10 00:37:23.232489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437686025.mount: Deactivated successfully. 
Sep 10 00:37:25.112609 containerd[1471]: time="2025-09-10T00:37:25.112514906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:25.113298 containerd[1471]: time="2025-09-10T00:37:25.113244517Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 10 00:37:25.114895 containerd[1471]: time="2025-09-10T00:37:25.114832631Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:25.118746 containerd[1471]: time="2025-09-10T00:37:25.118711699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:25.120537 containerd[1471]: time="2025-09-10T00:37:25.120465391Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.820007476s" Sep 10 00:37:25.120581 containerd[1471]: time="2025-09-10T00:37:25.120540760Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 10 00:37:25.121199 containerd[1471]: time="2025-09-10T00:37:25.121161275Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:37:25.808227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3184984779.mount: Deactivated successfully. 
Sep 10 00:37:25.815050 containerd[1471]: time="2025-09-10T00:37:25.815010863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:25.816229 containerd[1471]: time="2025-09-10T00:37:25.816172627Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 00:37:25.817385 containerd[1471]: time="2025-09-10T00:37:25.817343027Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:25.843494 containerd[1471]: time="2025-09-10T00:37:25.843421124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:25.844405 containerd[1471]: time="2025-09-10T00:37:25.844359202Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 723.159559ms" Sep 10 00:37:25.844468 containerd[1471]: time="2025-09-10T00:37:25.844406747Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:37:25.844954 containerd[1471]: time="2025-09-10T00:37:25.844901134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 10 00:37:26.566138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3920435193.mount: Deactivated successfully. Sep 10 00:37:29.648203 containerd[1471]: time="2025-09-10T00:37:29.648079021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:29.649227 containerd[1471]: time="2025-09-10T00:37:29.649178555Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Sep 10 00:37:29.650851 containerd[1471]: time="2025-09-10T00:37:29.650803999Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:29.654444 containerd[1471]: time="2025-09-10T00:37:29.654399637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:29.655853 containerd[1471]: time="2025-09-10T00:37:29.655814956Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.810875911s" Sep 10 00:37:29.655910 containerd[1471]: time="2025-09-10T00:37:29.655854046Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 10 00:37:30.179265 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 10 00:37:30.191357 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:37:30.373682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:37:30.387542 (kubelet)[2046]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:37:30.429172 kubelet[2046]: E0910 00:37:30.429019 2046 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:37:30.433997 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:37:30.434247 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:37:32.428436 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:37:32.439391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:37:32.469898 systemd[1]: Reloading requested from client PID 2062 ('systemctl') (unit session-7.scope)... Sep 10 00:37:32.469920 systemd[1]: Reloading... Sep 10 00:37:32.572002 zram_generator::config[2104]: No configuration found. Sep 10 00:37:33.274632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:37:33.353638 systemd[1]: Reloading finished in 883 ms. Sep 10 00:37:33.409079 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:37:33.409190 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:37:33.409469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:37:33.411867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:37:33.593151 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:37:33.598921 (kubelet)[2150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:37:33.703057 kubelet[2150]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:37:33.703057 kubelet[2150]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:37:33.703057 kubelet[2150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:37:33.703633 kubelet[2150]: I0910 00:37:33.703168 2150 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 00:37:34.005464 kubelet[2150]: I0910 00:37:34.005274 2150 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 00:37:34.005464 kubelet[2150]: I0910 00:37:34.005321 2150 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 00:37:34.005725 kubelet[2150]: I0910 00:37:34.005700 2150 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 00:37:34.034806 kubelet[2150]: E0910 00:37:34.034720 2150 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError"
Sep 10 00:37:34.036539 kubelet[2150]: I0910 00:37:34.036505 2150 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 00:37:34.049274 kubelet[2150]: E0910 00:37:34.049214 2150 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 10 00:37:34.049274 kubelet[2150]: I0910 00:37:34.049263 2150 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 10 00:37:34.055237 kubelet[2150]: I0910 00:37:34.055197 2150 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 00:37:34.055551 kubelet[2150]: I0910 00:37:34.055503 2150 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 00:37:34.055766 kubelet[2150]: I0910 00:37:34.055539 2150 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 00:37:34.055948 kubelet[2150]: I0910 00:37:34.055782 2150 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 00:37:34.055948 kubelet[2150]: I0910 00:37:34.055793 2150 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 00:37:34.056009 kubelet[2150]: I0910 00:37:34.055994 2150 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 00:37:34.060762 kubelet[2150]: I0910 00:37:34.060714 2150 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 00:37:34.060762 kubelet[2150]: I0910 00:37:34.060756 2150 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 00:37:34.060867 kubelet[2150]: I0910 00:37:34.060787 2150 kubelet.go:352] "Adding apiserver pod source"
Sep 10 00:37:34.060867 kubelet[2150]: I0910 00:37:34.060805 2150 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 00:37:34.064827 kubelet[2150]: I0910 00:37:34.064786 2150 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 10 00:37:34.065263 kubelet[2150]: I0910 00:37:34.065244 2150 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 00:37:34.066804 kubelet[2150]: W0910 00:37:34.066748 2150 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 00:37:34.070700 kubelet[2150]: W0910 00:37:34.070502 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:34.070700 kubelet[2150]: E0910 00:37:34.070564 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:34.071038 kubelet[2150]: W0910 00:37:34.070982 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:34.071078 kubelet[2150]: E0910 00:37:34.071061 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:34.073038 kubelet[2150]: I0910 00:37:34.072987 2150 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:37:34.073227 kubelet[2150]: I0910 00:37:34.073060 2150 server.go:1287] "Started kubelet" Sep 10 00:37:34.074318 kubelet[2150]: I0910 00:37:34.073284 2150 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:37:34.074882 kubelet[2150]: I0910 00:37:34.074848 2150 server.go:479] "Adding debug handlers to kubelet server" Sep 10 00:37:34.077045 kubelet[2150]: I0910 00:37:34.077003 2150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:37:34.077110 kubelet[2150]: I0910 00:37:34.077018 2150 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:37:34.078850 kubelet[2150]: I0910 00:37:34.077439 2150 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:37:34.078850 kubelet[2150]: I0910 00:37:34.077917 2150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:37:34.078850 kubelet[2150]: E0910 00:37:34.078159 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.078850 kubelet[2150]: I0910 00:37:34.078199 2150 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:37:34.078850 kubelet[2150]: I0910 00:37:34.078378 2150 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:37:34.078850 kubelet[2150]: I0910 00:37:34.078459 2150 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:37:34.082144 kubelet[2150]: W0910 00:37:34.080302 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:34.082144 kubelet[2150]: E0910 00:37:34.080368 2150 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:34.082144 kubelet[2150]: E0910 00:37:34.081295 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Sep 10 00:37:34.084826 kubelet[2150]: E0910 00:37:34.084796 2150 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:37:34.085499 kubelet[2150]: I0910 00:37:34.085448 2150 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:37:34.086997 kubelet[2150]: I0910 00:37:34.086961 2150 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:37:34.086997 kubelet[2150]: I0910 00:37:34.086986 2150 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:37:34.090281 kubelet[2150]: E0910 00:37:34.084327 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c4ca9b53a989 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:37:34.073014665 +0000 UTC m=+0.435094489,LastTimestamp:2025-09-10 00:37:34.073014665 +0000 UTC m=+0.435094489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:37:34.103350 kubelet[2150]: I0910 00:37:34.103308 2150 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:37:34.103350 kubelet[2150]: I0910 00:37:34.103337 2150 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:37:34.103526 kubelet[2150]: I0910 00:37:34.103376 2150 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:37:34.106361 kubelet[2150]: I0910 00:37:34.106303 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:37:34.108904 kubelet[2150]: I0910 00:37:34.108482 2150 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 00:37:34.108904 kubelet[2150]: I0910 00:37:34.108569 2150 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 00:37:34.108904 kubelet[2150]: I0910 00:37:34.108621 2150 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 00:37:34.108904 kubelet[2150]: I0910 00:37:34.108643 2150 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 00:37:34.108904 kubelet[2150]: E0910 00:37:34.108723 2150 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:37:34.109470 kubelet[2150]: W0910 00:37:34.109433 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:34.109527 kubelet[2150]: E0910 00:37:34.109475 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:34.178761 kubelet[2150]: E0910 00:37:34.178685 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.209264 kubelet[2150]: E0910 00:37:34.209174 2150 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:37:34.279988 kubelet[2150]: E0910 00:37:34.279716 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.282652 kubelet[2150]: E0910 00:37:34.282609 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Sep 10 00:37:34.379971 kubelet[2150]: E0910 00:37:34.379898 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.410371 kubelet[2150]: E0910 00:37:34.410278 2150 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:37:34.481042 kubelet[2150]: E0910 00:37:34.480929 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.581338 kubelet[2150]: E0910 00:37:34.581155 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.681839 kubelet[2150]: E0910 00:37:34.681767 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.683498 kubelet[2150]: E0910 00:37:34.683455 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Sep 10 00:37:34.766885 kubelet[2150]: I0910 00:37:34.766792 2150 policy_none.go:49] "None policy: Start" Sep 10 00:37:34.766885 kubelet[2150]: I0910 00:37:34.766862 2150 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:37:34.766885 kubelet[2150]: I0910 00:37:34.766909 2150 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:37:34.777093 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 10 00:37:34.785864 kubelet[2150]: E0910 00:37:34.782154 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:34.790723 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:37:34.794853 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 00:37:34.806431 kubelet[2150]: I0910 00:37:34.806376 2150 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:37:34.806729 kubelet[2150]: I0910 00:37:34.806689 2150 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:37:34.806832 kubelet[2150]: I0910 00:37:34.806741 2150 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:37:34.807088 kubelet[2150]: I0910 00:37:34.807051 2150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:37:34.808097 kubelet[2150]: E0910 00:37:34.808011 2150 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 00:37:34.808097 kubelet[2150]: E0910 00:37:34.808077 2150 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:37:34.820482 systemd[1]: Created slice kubepods-burstable-pod3178f8c603c7752c125e7f8a2c164c67.slice - libcontainer container kubepods-burstable-pod3178f8c603c7752c125e7f8a2c164c67.slice. Sep 10 00:37:34.832205 kubelet[2150]: E0910 00:37:34.832074 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:34.835625 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 10 00:37:34.846672 kubelet[2150]: E0910 00:37:34.846626 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:34.850156 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. 
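With the systemd cgroup driver on cgroup v2 ("CgroupVersion":2 in the NodeConfig dump), each slice systemd reports creating here is materialized under /sys/fs/cgroup: kubepods.slice holds the QoS sub-slices, and per-pod slices such as kubepods-burstable-pod3178f8c603c7752c125e7f8a2c164c67.slice nest under their QoS parent. A small sketch that checks for that layout; the mount point and nesting are assumptions based on the standard unified hierarchy, not something the log states directly:

    // Check for the QoS cgroup slices the log shows systemd creating. Paths
    // assume cgroup v2 mounted at /sys/fs/cgroup with the systemd driver's
    // nested slice layout.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        root := "/sys/fs/cgroup/kubepods.slice"
        for _, p := range []string{
            root,
            filepath.Join(root, "kubepods-burstable.slice"),
            filepath.Join(root, "kubepods-besteffort.slice"),
        } {
            if _, err := os.Stat(p); err != nil {
                fmt.Println("missing:", p)
                continue
            }
            fmt.Println("present:", p)
        }
    }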
Sep 10 00:37:34.852279 kubelet[2150]: E0910 00:37:34.852244 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:34.883642 kubelet[2150]: I0910 00:37:34.882811 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3178f8c603c7752c125e7f8a2c164c67-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3178f8c603c7752c125e7f8a2c164c67\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:34.883642 kubelet[2150]: I0910 00:37:34.882870 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:34.883642 kubelet[2150]: I0910 00:37:34.882896 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:34.883642 kubelet[2150]: I0910 00:37:34.882917 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:34.883642 kubelet[2150]: I0910 00:37:34.882936 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3178f8c603c7752c125e7f8a2c164c67-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3178f8c603c7752c125e7f8a2c164c67\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:34.884005 kubelet[2150]: I0910 00:37:34.882956 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3178f8c603c7752c125e7f8a2c164c67-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3178f8c603c7752c125e7f8a2c164c67\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:34.884005 kubelet[2150]: I0910 00:37:34.882972 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:34.884005 kubelet[2150]: I0910 00:37:34.882991 2150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:34.884005 kubelet[2150]: I0910 00:37:34.883008 2150 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:34.909433 kubelet[2150]: I0910 00:37:34.909373 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:34.909886 kubelet[2150]: E0910 00:37:34.909847 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 10 00:37:35.026536 kubelet[2150]: W0910 00:37:35.026485 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:35.026715 kubelet[2150]: E0910 00:37:35.026545 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:35.067631 kubelet[2150]: W0910 00:37:35.067570 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:35.067700 kubelet[2150]: E0910 00:37:35.067647 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:35.111819 kubelet[2150]: I0910 00:37:35.111614 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:35.112072 kubelet[2150]: E0910 00:37:35.112028 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 10 00:37:35.133543 kubelet[2150]: E0910 00:37:35.133472 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:35.134795 containerd[1471]: time="2025-09-10T00:37:35.134705561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3178f8c603c7752c125e7f8a2c164c67,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:35.148023 kubelet[2150]: E0910 00:37:35.147973 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:35.148887 containerd[1471]: time="2025-09-10T00:37:35.148818761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:35.153134 kubelet[2150]: E0910 00:37:35.153069 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:35.153554 containerd[1471]: time="2025-09-10T00:37:35.153510662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:35.313537 kubelet[2150]: W0910 00:37:35.313467 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:35.313537 kubelet[2150]: E0910 00:37:35.313537 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:35.440273 kubelet[2150]: W0910 00:37:35.440079 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:35.440273 kubelet[2150]: E0910 00:37:35.440195 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:35.484465 kubelet[2150]: E0910 00:37:35.484343 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="1.6s" Sep 10 00:37:35.513850 kubelet[2150]: I0910 00:37:35.513792 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:35.514179 kubelet[2150]: E0910 00:37:35.514147 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 10 00:37:36.116828 kubelet[2150]: E0910 00:37:36.116781 2150 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:36.315979 kubelet[2150]: I0910 00:37:36.315932 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:36.316496 kubelet[2150]: E0910 00:37:36.316447 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 10 00:37:36.688948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3736414429.mount: Deactivated successfully. 
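The "Failed to ensure lease exists, will retry" intervals climb 200ms, 400ms, 800ms, 1.6s across the entries above (and reach 3.2s below): the lease controller doubles its retry interval on each consecutive failure. A minimal sketch of that doubling progression only; any cap or jitter in the real controller is not shown here:

    // Reproduce the doubling retry interval visible in the lease-controller
    // errors (200ms, 400ms, 800ms, 1.6s, 3.2s). Progression only; this is not
    // kubelet's actual controller code.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        interval := 200 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d failed, retrying in %v\n", attempt, interval)
            interval *= 2
        }
    }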
Sep 10 00:37:37.085576 kubelet[2150]: E0910 00:37:37.085396 2150 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="3.2s" Sep 10 00:37:37.173692 containerd[1471]: time="2025-09-10T00:37:37.173614572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:37:37.179346 containerd[1471]: time="2025-09-10T00:37:37.179252637Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 10 00:37:37.180531 containerd[1471]: time="2025-09-10T00:37:37.180487020Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:37:37.183701 containerd[1471]: time="2025-09-10T00:37:37.183649740Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:37:37.185778 containerd[1471]: time="2025-09-10T00:37:37.185712207Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:37:37.188045 containerd[1471]: time="2025-09-10T00:37:37.187959891Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:37:37.190352 containerd[1471]: time="2025-09-10T00:37:37.190303433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:37:37.192814 containerd[1471]: time="2025-09-10T00:37:37.192748951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:37:37.194929 containerd[1471]: time="2025-09-10T00:37:37.194844162Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.045864986s" Sep 10 00:37:37.195649 containerd[1471]: time="2025-09-10T00:37:37.195595634Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.060755093s" Sep 10 00:37:37.201765 containerd[1471]: time="2025-09-10T00:37:37.201693975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.048090661s" Sep 10 
00:37:37.226923 kubelet[2150]: W0910 00:37:37.226858 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:37.226923 kubelet[2150]: E0910 00:37:37.226914 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:37.528601 kubelet[2150]: W0910 00:37:37.526997 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:37.528601 kubelet[2150]: W0910 00:37:37.526993 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:37.528601 kubelet[2150]: E0910 00:37:37.527146 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:37.528601 kubelet[2150]: E0910 00:37:37.527080 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:37.587016 kubelet[2150]: W0910 00:37:37.586955 2150 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Sep 10 00:37:37.587016 kubelet[2150]: E0910 00:37:37.587005 2150 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Sep 10 00:37:37.711174 kubelet[2150]: E0910 00:37:37.698107 2150 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c4ca9b53a989 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:37:34.073014665 +0000 UTC m=+0.435094489,LastTimestamp:2025-09-10 00:37:34.073014665 +0000 UTC 
m=+0.435094489,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:37:37.733888 containerd[1471]: time="2025-09-10T00:37:37.728675080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:37.733888 containerd[1471]: time="2025-09-10T00:37:37.728807364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:37.733888 containerd[1471]: time="2025-09-10T00:37:37.728841052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:37.733888 containerd[1471]: time="2025-09-10T00:37:37.728960369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:37.860980 containerd[1471]: time="2025-09-10T00:37:37.743303660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:37.860980 containerd[1471]: time="2025-09-10T00:37:37.743407692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:37.860980 containerd[1471]: time="2025-09-10T00:37:37.743428593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:37.860980 containerd[1471]: time="2025-09-10T00:37:37.743531782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:37.863040 containerd[1471]: time="2025-09-10T00:37:37.862890354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:37.863800 containerd[1471]: time="2025-09-10T00:37:37.863743560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:37.863921 containerd[1471]: time="2025-09-10T00:37:37.863877369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:37.864455 containerd[1471]: time="2025-09-10T00:37:37.864409052Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:37.919440 kubelet[2150]: I0910 00:37:37.918914 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:37.919440 kubelet[2150]: E0910 00:37:37.919329 2150 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Sep 10 00:37:38.004614 systemd[1]: Started cri-containerd-930f8f1dc6f29a102850ffd064e9d97ae9ddcee723e0ca204d11991aaa3d141f.scope - libcontainer container 930f8f1dc6f29a102850ffd064e9d97ae9ddcee723e0ca204d11991aaa3d141f. Sep 10 00:37:38.014141 systemd[1]: Started cri-containerd-e4f0521345df0083a5a72c02fe3d97b164bfd5f98050c082f0358b9fb8d77809.scope - libcontainer container e4f0521345df0083a5a72c02fe3d97b164bfd5f98050c082f0358b9fb8d77809. 
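The cri-containerd-*.scope units starting here (a third follows just below) are the pause sandboxes for the three static pods; the surrounding RunPodSandbox, CreateContainer, and StartContainer messages trace the kubelet driving containerd over the CRI runtime.v1 gRPC API. A bare-bones sketch of that call sequence; the request configs are pared down for illustration and the image tag is an assumption, where a real kubelet fills in mounts, ports, and security context:

    // The RunPodSandbox -> CreateContainer -> StartContainer sequence the log
    // records, issued over the CRI runtime.v1 gRPC API on containerd's socket.
    // Configs are stripped to near-empty for illustration.
    package main

    import (
        "context"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.NewClient("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        // Sandbox metadata taken from the kube-scheduler-localhost entry above.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
            Config: &runtimeapi.PodSandboxConfig{
                Metadata: &runtimeapi.PodSandboxMetadata{
                    Name:      "kube-scheduler-localhost",
                    Namespace: "kube-system",
                    Uid:       "72a30db4fc25e4da65a3b99eba43be94",
                },
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
                // Image tag is an assumption based on kubeletVersion v1.32.4.
                Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.32.4"},
            },
        })
        if err != nil {
            log.Fatal(err)
        }

        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
            ContainerId: c.ContainerId,
        }); err != nil {
            log.Fatal(err)
        }
    }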
Sep 10 00:37:38.067490 systemd[1]: Started cri-containerd-57b41a22389664c68edf2e93958ec5a1df6a29479d2baf81a901f4570dfa0bba.scope - libcontainer container 57b41a22389664c68edf2e93958ec5a1df6a29479d2baf81a901f4570dfa0bba. Sep 10 00:37:38.128210 containerd[1471]: time="2025-09-10T00:37:38.128036980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4f0521345df0083a5a72c02fe3d97b164bfd5f98050c082f0358b9fb8d77809\"" Sep 10 00:37:38.129532 kubelet[2150]: E0910 00:37:38.129482 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:38.132724 containerd[1471]: time="2025-09-10T00:37:38.132667352Z" level=info msg="CreateContainer within sandbox \"e4f0521345df0083a5a72c02fe3d97b164bfd5f98050c082f0358b9fb8d77809\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:37:38.134563 containerd[1471]: time="2025-09-10T00:37:38.134508679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"930f8f1dc6f29a102850ffd064e9d97ae9ddcee723e0ca204d11991aaa3d141f\"" Sep 10 00:37:38.134704 containerd[1471]: time="2025-09-10T00:37:38.134654593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3178f8c603c7752c125e7f8a2c164c67,Namespace:kube-system,Attempt:0,} returns sandbox id \"57b41a22389664c68edf2e93958ec5a1df6a29479d2baf81a901f4570dfa0bba\"" Sep 10 00:37:38.135684 kubelet[2150]: E0910 00:37:38.135658 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:38.135758 kubelet[2150]: E0910 00:37:38.135691 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:38.137713 containerd[1471]: time="2025-09-10T00:37:38.137665308Z" level=info msg="CreateContainer within sandbox \"57b41a22389664c68edf2e93958ec5a1df6a29479d2baf81a901f4570dfa0bba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:37:38.137906 containerd[1471]: time="2025-09-10T00:37:38.137864132Z" level=info msg="CreateContainer within sandbox \"930f8f1dc6f29a102850ffd064e9d97ae9ddcee723e0ca204d11991aaa3d141f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:37:38.802643 containerd[1471]: time="2025-09-10T00:37:38.802540243Z" level=info msg="CreateContainer within sandbox \"e4f0521345df0083a5a72c02fe3d97b164bfd5f98050c082f0358b9fb8d77809\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7c7b5ffb8cd3db0c7fa0bd2afaeec0585c6ab52a45dab109576da68c23edce2c\"" Sep 10 00:37:38.803465 containerd[1471]: time="2025-09-10T00:37:38.803410523Z" level=info msg="StartContainer for \"7c7b5ffb8cd3db0c7fa0bd2afaeec0585c6ab52a45dab109576da68c23edce2c\"" Sep 10 00:37:38.821613 containerd[1471]: time="2025-09-10T00:37:38.821551765Z" level=info msg="CreateContainer within sandbox \"57b41a22389664c68edf2e93958ec5a1df6a29479d2baf81a901f4570dfa0bba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"259f38828f36206485b7cd31dbdc37e1fa58f0fb2eb287d4ef8cf9495e342dea\"" Sep 10 00:37:38.822245 containerd[1471]: time="2025-09-10T00:37:38.822181993Z" level=info msg="StartContainer for \"259f38828f36206485b7cd31dbdc37e1fa58f0fb2eb287d4ef8cf9495e342dea\"" Sep 10 00:37:38.834081 containerd[1471]: time="2025-09-10T00:37:38.833978757Z" level=info msg="CreateContainer within sandbox \"930f8f1dc6f29a102850ffd064e9d97ae9ddcee723e0ca204d11991aaa3d141f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"43855188115a70713a19d50503296bf6f891bcdd0af782bcf055ed3038cab76d\"" Sep 10 00:37:38.835692 containerd[1471]: time="2025-09-10T00:37:38.834382995Z" level=info msg="StartContainer for \"43855188115a70713a19d50503296bf6f891bcdd0af782bcf055ed3038cab76d\"" Sep 10 00:37:38.886423 systemd[1]: Started cri-containerd-7c7b5ffb8cd3db0c7fa0bd2afaeec0585c6ab52a45dab109576da68c23edce2c.scope - libcontainer container 7c7b5ffb8cd3db0c7fa0bd2afaeec0585c6ab52a45dab109576da68c23edce2c. Sep 10 00:37:38.917272 systemd[1]: Started cri-containerd-43855188115a70713a19d50503296bf6f891bcdd0af782bcf055ed3038cab76d.scope - libcontainer container 43855188115a70713a19d50503296bf6f891bcdd0af782bcf055ed3038cab76d. Sep 10 00:37:38.921375 systemd[1]: Started cri-containerd-259f38828f36206485b7cd31dbdc37e1fa58f0fb2eb287d4ef8cf9495e342dea.scope - libcontainer container 259f38828f36206485b7cd31dbdc37e1fa58f0fb2eb287d4ef8cf9495e342dea. Sep 10 00:37:38.961144 containerd[1471]: time="2025-09-10T00:37:38.961040488Z" level=info msg="StartContainer for \"7c7b5ffb8cd3db0c7fa0bd2afaeec0585c6ab52a45dab109576da68c23edce2c\" returns successfully" Sep 10 00:37:38.993529 containerd[1471]: time="2025-09-10T00:37:38.993461331Z" level=info msg="StartContainer for \"43855188115a70713a19d50503296bf6f891bcdd0af782bcf055ed3038cab76d\" returns successfully" Sep 10 00:37:39.007763 containerd[1471]: time="2025-09-10T00:37:39.007685942Z" level=info msg="StartContainer for \"259f38828f36206485b7cd31dbdc37e1fa58f0fb2eb287d4ef8cf9495e342dea\" returns successfully" Sep 10 00:37:39.127574 kubelet[2150]: E0910 00:37:39.127079 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:39.129020 kubelet[2150]: E0910 00:37:39.128255 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:39.129020 kubelet[2150]: E0910 00:37:39.128432 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:39.129020 kubelet[2150]: E0910 00:37:39.128552 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:39.129020 kubelet[2150]: E0910 00:37:39.128832 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:39.129020 kubelet[2150]: E0910 00:37:39.128929 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:40.130688 kubelet[2150]: E0910 00:37:40.130645 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:40.130688 kubelet[2150]: E0910 00:37:40.130670 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:40.131494 kubelet[2150]: E0910 00:37:40.130810 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:40.131494 kubelet[2150]: E0910 00:37:40.130810 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:40.521662 kubelet[2150]: E0910 00:37:40.521512 2150 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:37:41.079050 kubelet[2150]: E0910 00:37:41.078971 2150 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 10 00:37:41.121150 kubelet[2150]: I0910 00:37:41.121068 2150 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:41.131419 kubelet[2150]: E0910 00:37:41.131381 2150 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:37:41.131887 kubelet[2150]: E0910 00:37:41.131551 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:41.133358 kubelet[2150]: I0910 00:37:41.133317 2150 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:37:41.133358 kubelet[2150]: E0910 00:37:41.133349 2150 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:37:41.146396 kubelet[2150]: E0910 00:37:41.146347 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.247442 kubelet[2150]: E0910 00:37:41.247357 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.348453 kubelet[2150]: E0910 00:37:41.348249 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.449211 kubelet[2150]: E0910 00:37:41.449100 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.550340 kubelet[2150]: E0910 00:37:41.550274 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.651522 kubelet[2150]: E0910 00:37:41.651356 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.752543 kubelet[2150]: E0910 00:37:41.752482 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.853515 kubelet[2150]: E0910 00:37:41.853424 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:41.954667 kubelet[2150]: 
E0910 00:37:41.954507 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.055606 kubelet[2150]: E0910 00:37:42.055555 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.156448 kubelet[2150]: E0910 00:37:42.156385 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.257373 kubelet[2150]: E0910 00:37:42.257212 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.358200 kubelet[2150]: E0910 00:37:42.358069 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.459066 kubelet[2150]: E0910 00:37:42.458999 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.560207 kubelet[2150]: E0910 00:37:42.560025 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.660499 kubelet[2150]: E0910 00:37:42.660408 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.761577 kubelet[2150]: E0910 00:37:42.761495 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.861815 kubelet[2150]: E0910 00:37:42.861701 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:42.962882 kubelet[2150]: E0910 00:37:42.962820 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:43.063638 kubelet[2150]: E0910 00:37:43.063528 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:43.164326 kubelet[2150]: E0910 00:37:43.163871 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:43.264835 kubelet[2150]: E0910 00:37:43.264772 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:43.365635 kubelet[2150]: E0910 00:37:43.365576 2150 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:43.481421 kubelet[2150]: I0910 00:37:43.481217 2150 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:43.492025 kubelet[2150]: I0910 00:37:43.491972 2150 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:43.502324 kubelet[2150]: I0910 00:37:43.502253 2150 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.053330 systemd[1]: Reloading requested from client PID 2434 ('systemctl') (unit session-7.scope)... Sep 10 00:37:44.053348 systemd[1]: Reloading... 
Sep 10 00:37:44.072096 kubelet[2150]: I0910 00:37:44.072038 2150 apiserver.go:52] "Watching apiserver" Sep 10 00:37:44.074848 kubelet[2150]: E0910 00:37:44.074252 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:44.074848 kubelet[2150]: E0910 00:37:44.074620 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:44.074978 kubelet[2150]: E0910 00:37:44.074957 2150 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:44.079571 kubelet[2150]: I0910 00:37:44.079536 2150 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:37:44.133025 kubelet[2150]: I0910 00:37:44.132932 2150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.132869881 podStartE2EDuration="1.132869881s" podCreationTimestamp="2025-09-10 00:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:44.132598004 +0000 UTC m=+10.494677828" watchObservedRunningTime="2025-09-10 00:37:44.132869881 +0000 UTC m=+10.494949705" Sep 10 00:37:44.142379 kubelet[2150]: I0910 00:37:44.142189 2150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.142169604 podStartE2EDuration="1.142169604s" podCreationTimestamp="2025-09-10 00:37:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:44.142030993 +0000 UTC m=+10.504110817" watchObservedRunningTime="2025-09-10 00:37:44.142169604 +0000 UTC m=+10.504249428" Sep 10 00:37:44.168150 zram_generator::config[2476]: No configuration found. Sep 10 00:37:44.299760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:37:44.409369 systemd[1]: Reloading finished in 355 ms. Sep 10 00:37:44.454257 kubelet[2150]: I0910 00:37:44.454191 2150 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:37:44.454402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:37:44.475438 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:37:44.475872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:37:44.475958 systemd[1]: kubelet.service: Consumed 1.111s CPU time, 132.0M memory peak, 0B memory swap peak. Sep 10 00:37:44.490506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:37:44.676578 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
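The pod_startup_latency_tracker entries are plain arithmetic over the timestamps they print: podStartSLOduration is watchObservedRunningTime minus podCreationTimestamp (the zero pull timestamps mean no image pull was attributed to this startup). Checking the kube-apiserver-localhost numbers from the entry above:

    // podStartSLOduration = watchObservedRunningTime - podCreationTimestamp,
    // reproduced with the kube-apiserver-localhost timestamps from the log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-09-10 00:37:43 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-09-10 00:37:44.132869881 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 1.132869881s, matching podStartSLOduration
    }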
Sep 10 00:37:44.682957 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:37:44.726734 kubelet[2518]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:37:44.726734 kubelet[2518]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:37:44.726734 kubelet[2518]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:37:44.727282 kubelet[2518]: I0910 00:37:44.726801 2518 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:37:44.734921 kubelet[2518]: I0910 00:37:44.734879 2518 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 00:37:44.734921 kubelet[2518]: I0910 00:37:44.734907 2518 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:37:44.735195 kubelet[2518]: I0910 00:37:44.735177 2518 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 00:37:44.736398 kubelet[2518]: I0910 00:37:44.736371 2518 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 10 00:37:44.739648 kubelet[2518]: I0910 00:37:44.739602 2518 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:37:44.742645 kubelet[2518]: E0910 00:37:44.742615 2518 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:37:44.742645 kubelet[2518]: I0910 00:37:44.742644 2518 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:37:44.750356 kubelet[2518]: I0910 00:37:44.750321 2518 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:37:44.750610 kubelet[2518]: I0910 00:37:44.750582 2518 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:37:44.750775 kubelet[2518]: I0910 00:37:44.750607 2518 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:37:44.750873 kubelet[2518]: I0910 00:37:44.750782 2518 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:37:44.750873 kubelet[2518]: I0910 00:37:44.750792 2518 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 00:37:44.750873 kubelet[2518]: I0910 00:37:44.750847 2518 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:37:44.751025 kubelet[2518]: I0910 00:37:44.751005 2518 kubelet.go:446] "Attempting to sync node with API server" Sep 10 00:37:44.751070 kubelet[2518]: I0910 00:37:44.751046 2518 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:37:44.751106 kubelet[2518]: I0910 00:37:44.751073 2518 kubelet.go:352] "Adding apiserver pod source" Sep 10 00:37:44.751106 kubelet[2518]: I0910 00:37:44.751085 2518 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:37:44.751997 kubelet[2518]: I0910 00:37:44.751973 2518 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:37:44.752476 kubelet[2518]: I0910 00:37:44.752441 2518 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 00:37:44.753023 kubelet[2518]: I0910 00:37:44.752989 2518 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:37:44.753023 kubelet[2518]: I0910 00:37:44.753024 2518 server.go:1287] "Started kubelet" Sep 10 00:37:44.754178 kubelet[2518]: I0910 00:37:44.754080 2518 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:37:44.754667 kubelet[2518]: I0910 00:37:44.754615 2518 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:37:44.755045 kubelet[2518]: I0910 00:37:44.755026 2518 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:37:44.755362 kubelet[2518]: I0910 00:37:44.755325 2518 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:37:44.758360 kubelet[2518]: I0910 00:37:44.758340 2518 server.go:479] "Adding debug handlers to kubelet server" Sep 10 00:37:44.760274 kubelet[2518]: I0910 00:37:44.760060 2518 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:37:44.762583 kubelet[2518]: E0910 00:37:44.762298 2518 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:37:44.762583 kubelet[2518]: I0910 00:37:44.762341 2518 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:37:44.762583 kubelet[2518]: I0910 00:37:44.762484 2518 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:37:44.762709 kubelet[2518]: I0910 00:37:44.762619 2518 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:37:44.763705 kubelet[2518]: I0910 00:37:44.763685 2518 factory.go:221] Registration of the systemd container factory successfully Sep 10 00:37:44.763817 kubelet[2518]: I0910 00:37:44.763793 2518 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:37:44.771021 kubelet[2518]: I0910 00:37:44.770929 2518 factory.go:221] Registration of the containerd container factory successfully Sep 10 00:37:44.776152 kubelet[2518]: E0910 00:37:44.776104 2518 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:37:44.777898 kubelet[2518]: I0910 00:37:44.777863 2518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 00:37:44.779419 kubelet[2518]: I0910 00:37:44.779387 2518 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 00:37:44.779419 kubelet[2518]: I0910 00:37:44.779420 2518 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 00:37:44.779490 kubelet[2518]: I0910 00:37:44.779441 2518 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
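
The deprecation warnings and the cgroup-driver fallback above all point at the same fix: carry these settings in the file passed to --config instead of on the command line. A minimal KubeletConfiguration sketch follows; values are taken from this log where they appear (cgroup driver, serving certificates, listen address, static-pod path, hard-eviction thresholds), while the CRI socket and volume-plugin directory are assumptions based on the containerd default and the FlexVolume probe paths seen later in the log.

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # replaces the deprecated --container-runtime-endpoint flag (assumed containerd default socket)
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  # replaces the deprecated --volume-plugin-dir flag (path inferred from the nodeagent~uds probes below)
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec
  cgroupDriver: systemd              # the value the kubelet fell back to above
  staticPodPath: /etc/kubernetes/manifests
  address: 0.0.0.0
  port: 10250
  tlsCertFile: /var/lib/kubelet/pki/kubelet.crt
  tlsPrivateKeyFile: /var/lib/kubelet/pki/kubelet.key
  evictionHard:                      # matches the HardEvictionThresholds in the nodeConfig dump
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    imagefs.inodesFree: "5%"
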
Sep 10 00:37:44.779490 kubelet[2518]: I0910 00:37:44.779448 2518 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 00:37:44.779543 kubelet[2518]: E0910 00:37:44.779494 2518 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:37:44.816525 kubelet[2518]: I0910 00:37:44.816480 2518 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:37:44.816525 kubelet[2518]: I0910 00:37:44.816504 2518 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:37:44.816525 kubelet[2518]: I0910 00:37:44.816534 2518 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:37:44.816794 kubelet[2518]: I0910 00:37:44.816776 2518 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:37:44.816835 kubelet[2518]: I0910 00:37:44.816794 2518 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:37:44.816835 kubelet[2518]: I0910 00:37:44.816820 2518 policy_none.go:49] "None policy: Start" Sep 10 00:37:44.816879 kubelet[2518]: I0910 00:37:44.816837 2518 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:37:44.816879 kubelet[2518]: I0910 00:37:44.816852 2518 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:37:44.817020 kubelet[2518]: I0910 00:37:44.817005 2518 state_mem.go:75] "Updated machine memory state" Sep 10 00:37:44.821877 kubelet[2518]: I0910 00:37:44.821784 2518 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 00:37:44.822037 kubelet[2518]: I0910 00:37:44.822014 2518 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:37:44.822088 kubelet[2518]: I0910 00:37:44.822031 2518 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:37:44.822361 kubelet[2518]: I0910 00:37:44.822316 2518 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:37:44.823536 kubelet[2518]: E0910 00:37:44.823479 2518 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:37:44.880492 kubelet[2518]: I0910 00:37:44.880408 2518 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.880905 kubelet[2518]: I0910 00:37:44.880596 2518 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:44.880905 kubelet[2518]: I0910 00:37:44.880748 2518 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:44.928665 kubelet[2518]: I0910 00:37:44.928530 2518 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:37:44.964268 kubelet[2518]: I0910 00:37:44.964184 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.964268 kubelet[2518]: I0910 00:37:44.964258 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.964474 kubelet[2518]: I0910 00:37:44.964292 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:44.964474 kubelet[2518]: I0910 00:37:44.964309 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.964474 kubelet[2518]: I0910 00:37:44.964326 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3178f8c603c7752c125e7f8a2c164c67-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3178f8c603c7752c125e7f8a2c164c67\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:44.964474 kubelet[2518]: I0910 00:37:44.964345 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3178f8c603c7752c125e7f8a2c164c67-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3178f8c603c7752c125e7f8a2c164c67\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:44.964474 kubelet[2518]: I0910 00:37:44.964422 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.964650 kubelet[2518]: I0910 00:37:44.964498 2518 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:44.964650 kubelet[2518]: I0910 00:37:44.964539 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3178f8c603c7752c125e7f8a2c164c67-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3178f8c603c7752c125e7f8a2c164c67\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:45.112624 kubelet[2518]: E0910 00:37:45.112559 2518 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 00:37:45.112906 kubelet[2518]: E0910 00:37:45.112767 2518 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:37:45.112906 kubelet[2518]: E0910 00:37:45.112790 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.113146 kubelet[2518]: E0910 00:37:45.112830 2518 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:37:45.113883 kubelet[2518]: E0910 00:37:45.113518 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.113883 kubelet[2518]: E0910 00:37:45.113744 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.212684 kubelet[2518]: I0910 00:37:45.212516 2518 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 00:37:45.212684 kubelet[2518]: I0910 00:37:45.212625 2518 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:37:45.751832 kubelet[2518]: I0910 00:37:45.751758 2518 apiserver.go:52] "Watching apiserver" Sep 10 00:37:45.763497 kubelet[2518]: I0910 00:37:45.763456 2518 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:37:45.794165 kubelet[2518]: E0910 00:37:45.793742 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.794165 kubelet[2518]: E0910 00:37:45.793835 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:45.794165 kubelet[2518]: E0910 00:37:45.793754 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:46.795772 kubelet[2518]: E0910 00:37:46.795726 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:46.796565 kubelet[2518]: E0910 00:37:46.796539 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:47.798621 kubelet[2518]: E0910 00:37:47.798581 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:48.588509 kubelet[2518]: E0910 00:37:48.588467 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:48.632080 kubelet[2518]: I0910 00:37:48.632030 2518 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:37:48.632394 containerd[1471]: time="2025-09-10T00:37:48.632359625Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 00:37:48.632837 kubelet[2518]: I0910 00:37:48.632575 2518 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:37:48.800058 kubelet[2518]: E0910 00:37:48.800023 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:49.399086 systemd[1]: Created slice kubepods-besteffort-podbbe4f8f5_fe58_4cc8_99b8_c9bdcd1f111a.slice - libcontainer container kubepods-besteffort-podbbe4f8f5_fe58_4cc8_99b8_c9bdcd1f111a.slice. Sep 10 00:37:49.490579 kubelet[2518]: I0910 00:37:49.490501 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a-xtables-lock\") pod \"kube-proxy-fbfdg\" (UID: \"bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a\") " pod="kube-system/kube-proxy-fbfdg" Sep 10 00:37:49.490579 kubelet[2518]: I0910 00:37:49.490567 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a-kube-proxy\") pod \"kube-proxy-fbfdg\" (UID: \"bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a\") " pod="kube-system/kube-proxy-fbfdg" Sep 10 00:37:49.490823 kubelet[2518]: I0910 00:37:49.490600 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a-lib-modules\") pod \"kube-proxy-fbfdg\" (UID: \"bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a\") " pod="kube-system/kube-proxy-fbfdg" Sep 10 00:37:49.490823 kubelet[2518]: I0910 00:37:49.490703 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thv4v\" (UniqueName: \"kubernetes.io/projected/bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a-kube-api-access-thv4v\") pod \"kube-proxy-fbfdg\" (UID: \"bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a\") " pod="kube-system/kube-proxy-fbfdg" Sep 10 00:37:49.802030 kubelet[2518]: E0910 00:37:49.801881 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:49.937033 systemd[1]: Created slice 
kubepods-besteffort-pod65422705_400f_47b2_82d2_399752e8af6e.slice - libcontainer container kubepods-besteffort-pod65422705_400f_47b2_82d2_399752e8af6e.slice. Sep 10 00:37:49.994203 kubelet[2518]: I0910 00:37:49.994160 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65422705-400f-47b2-82d2-399752e8af6e-var-lib-calico\") pod \"tigera-operator-755d956888-95wgb\" (UID: \"65422705-400f-47b2-82d2-399752e8af6e\") " pod="tigera-operator/tigera-operator-755d956888-95wgb" Sep 10 00:37:49.994203 kubelet[2518]: I0910 00:37:49.994197 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxppf\" (UniqueName: \"kubernetes.io/projected/65422705-400f-47b2-82d2-399752e8af6e-kube-api-access-jxppf\") pod \"tigera-operator-755d956888-95wgb\" (UID: \"65422705-400f-47b2-82d2-399752e8af6e\") " pod="tigera-operator/tigera-operator-755d956888-95wgb" Sep 10 00:37:50.009453 kubelet[2518]: E0910 00:37:50.009418 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:50.010583 containerd[1471]: time="2025-09-10T00:37:50.010546658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbfdg,Uid:bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a,Namespace:kube-system,Attempt:0,}" Sep 10 00:37:50.044727 containerd[1471]: time="2025-09-10T00:37:50.044610959Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:50.044727 containerd[1471]: time="2025-09-10T00:37:50.044681473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:50.044727 containerd[1471]: time="2025-09-10T00:37:50.044696699Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:50.045072 containerd[1471]: time="2025-09-10T00:37:50.044799076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:50.064799 systemd[1]: run-containerd-runc-k8s.io-2e5fb4cd03e0c499a54d10834d95d94ac24b02d408152617cf6968b2b9fbbebf-runc.iqU3wp.mount: Deactivated successfully. Sep 10 00:37:50.075334 systemd[1]: Started cri-containerd-2e5fb4cd03e0c499a54d10834d95d94ac24b02d408152617cf6968b2b9fbbebf.scope - libcontainer container 2e5fb4cd03e0c499a54d10834d95d94ac24b02d408152617cf6968b2b9fbbebf. 
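
The four VerifyControllerAttachedVolume entries for kube-proxy-fbfdg at 00:37:49 list exactly the volumes a kubeadm-style kube-proxy DaemonSet declares. A sketch of that fragment of the pod spec, with host paths assumed from kubeadm's stock manifest (the log records only the volume names):

  volumes:
  - name: kube-proxy
    configMap:
      name: kube-proxy          # kubeadm ships config.conf and kubeconfig.conf in this ConfigMap
  - name: xtables-lock
    hostPath:
      path: /run/xtables.lock
      type: FileOrCreate        # serializes iptables access with other writers on the host
  - name: lib-modules
    hostPath:
      path: /lib/modules
  # kube-api-access-thv4v is the projected service-account token volume that is
  # injected automatically; it is not declared in the manifest itself.
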
Sep 10 00:37:50.105467 containerd[1471]: time="2025-09-10T00:37:50.105354013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbfdg,Uid:bbe4f8f5-fe58-4cc8-99b8-c9bdcd1f111a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e5fb4cd03e0c499a54d10834d95d94ac24b02d408152617cf6968b2b9fbbebf\"" Sep 10 00:37:50.106091 kubelet[2518]: E0910 00:37:50.106062 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:50.108509 containerd[1471]: time="2025-09-10T00:37:50.108472798Z" level=info msg="CreateContainer within sandbox \"2e5fb4cd03e0c499a54d10834d95d94ac24b02d408152617cf6968b2b9fbbebf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:37:50.126381 containerd[1471]: time="2025-09-10T00:37:50.126319543Z" level=info msg="CreateContainer within sandbox \"2e5fb4cd03e0c499a54d10834d95d94ac24b02d408152617cf6968b2b9fbbebf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c8bb8300952b023f8714b72e84dd14127b4ad3e2b26161d5353455b45900cefd\"" Sep 10 00:37:50.126974 containerd[1471]: time="2025-09-10T00:37:50.126943189Z" level=info msg="StartContainer for \"c8bb8300952b023f8714b72e84dd14127b4ad3e2b26161d5353455b45900cefd\"" Sep 10 00:37:50.161293 systemd[1]: Started cri-containerd-c8bb8300952b023f8714b72e84dd14127b4ad3e2b26161d5353455b45900cefd.scope - libcontainer container c8bb8300952b023f8714b72e84dd14127b4ad3e2b26161d5353455b45900cefd. Sep 10 00:37:50.198635 containerd[1471]: time="2025-09-10T00:37:50.198592069Z" level=info msg="StartContainer for \"c8bb8300952b023f8714b72e84dd14127b4ad3e2b26161d5353455b45900cefd\" returns successfully" Sep 10 00:37:50.241607 containerd[1471]: time="2025-09-10T00:37:50.241534804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-95wgb,Uid:65422705-400f-47b2-82d2-399752e8af6e,Namespace:tigera-operator,Attempt:0,}" Sep 10 00:37:50.273793 containerd[1471]: time="2025-09-10T00:37:50.272855783Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:37:50.273793 containerd[1471]: time="2025-09-10T00:37:50.273750488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:37:50.273793 containerd[1471]: time="2025-09-10T00:37:50.273770665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:50.274043 containerd[1471]: time="2025-09-10T00:37:50.273883427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:37:50.299345 systemd[1]: Started cri-containerd-a97d1d95153cabb36c54381c8feb6c3117a9682e2b93f315af59de2c82c8e54d.scope - libcontainer container a97d1d95153cabb36c54381c8feb6c3117a9682e2b93f315af59de2c82c8e54d. 
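
The tigera-operator pod whose sandbox was just created declares the two volumes logged at 00:37:49.994: a hostPath for /var/lib/calico plus the auto-injected token volume. A sketch of the corresponding container wiring, with the mount path and read-only flag assumed from the stock operator manifest:

  volumes:
  - name: var-lib-calico
    hostPath:
      path: /var/lib/calico     # path assumed from the volume name
  containers:
  - name: tigera-operator
    image: quay.io/tigera/operator:v1.38.6
    volumeMounts:
    - name: var-lib-calico
      mountPath: /var/lib/calico
      readOnly: true
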
Sep 10 00:37:50.345050 containerd[1471]: time="2025-09-10T00:37:50.344896783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-95wgb,Uid:65422705-400f-47b2-82d2-399752e8af6e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a97d1d95153cabb36c54381c8feb6c3117a9682e2b93f315af59de2c82c8e54d\"" Sep 10 00:37:50.347631 containerd[1471]: time="2025-09-10T00:37:50.346808838Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 00:37:50.806627 kubelet[2518]: E0910 00:37:50.806440 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.178416 kubelet[2518]: E0910 00:37:51.174417 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:51.190142 kubelet[2518]: I0910 00:37:51.190026 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fbfdg" podStartSLOduration=2.1900051 podStartE2EDuration="2.1900051s" podCreationTimestamp="2025-09-10 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:37:50.814544871 +0000 UTC m=+6.126759560" watchObservedRunningTime="2025-09-10 00:37:51.1900051 +0000 UTC m=+6.502219789" Sep 10 00:37:51.273261 update_engine[1457]: I20250910 00:37:51.273082 1457 update_attempter.cc:509] Updating boot flags... Sep 10 00:37:51.301674 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2824) Sep 10 00:37:51.349196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2825) Sep 10 00:37:51.383148 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2825) Sep 10 00:37:51.808307 kubelet[2518]: E0910 00:37:51.808263 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:52.409285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1877889614.mount: Deactivated successfully. 
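
The recurring dns.go:153 errors mean the resolv.conf the kubelet hands to pods lists more than three nameservers; the Linux resolver honors at most three, so the kubelet drops the extras and applies only the "1.1.1.1 1.0.0.1 8.8.8.8" line it reports. The remedy is to trim the file the kubelet reads, which is selected by resolvConf in the KubeletConfiguration; a sketch, with the systemd-resolved stub path as an assumption:

  # KubeletConfiguration fragment
  resolvConf: /run/systemd/resolve/resolv.conf   # assumed path; must list at most 3 nameservers
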
Sep 10 00:37:53.943868 containerd[1471]: time="2025-09-10T00:37:53.943784823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:53.944916 containerd[1471]: time="2025-09-10T00:37:53.944874546Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 10 00:37:53.945983 containerd[1471]: time="2025-09-10T00:37:53.945940183Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:53.948592 containerd[1471]: time="2025-09-10T00:37:53.948523344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:37:53.949557 containerd[1471]: time="2025-09-10T00:37:53.949509413Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.602668804s" Sep 10 00:37:53.949608 containerd[1471]: time="2025-09-10T00:37:53.949554575Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 00:37:53.952181 containerd[1471]: time="2025-09-10T00:37:53.952145512Z" level=info msg="CreateContainer within sandbox \"a97d1d95153cabb36c54381c8feb6c3117a9682e2b93f315af59de2c82c8e54d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 00:37:53.977272 containerd[1471]: time="2025-09-10T00:37:53.977212045Z" level=info msg="CreateContainer within sandbox \"a97d1d95153cabb36c54381c8feb6c3117a9682e2b93f315af59de2c82c8e54d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c99b8b101081906469fa99b655216d71d371c92c3761bb19929d30012c3fcbbf\"" Sep 10 00:37:53.978027 containerd[1471]: time="2025-09-10T00:37:53.977974304Z" level=info msg="StartContainer for \"c99b8b101081906469fa99b655216d71d371c92c3761bb19929d30012c3fcbbf\"" Sep 10 00:37:54.015252 systemd[1]: Started cri-containerd-c99b8b101081906469fa99b655216d71d371c92c3761bb19929d30012c3fcbbf.scope - libcontainer container c99b8b101081906469fa99b655216d71d371c92c3761bb19929d30012c3fcbbf. 
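
The pull above resolves the floating v1.38.6 tag to a content digest (about 25 MB transferred in roughly 3.6 s). To make redeployments byte-for-byte reproducible, the image can be pinned by that digest rather than by tag; a sketch using the repo digest reported in the log:

  containers:
  - name: tigera-operator
    image: quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e
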
Sep 10 00:37:54.047220 containerd[1471]: time="2025-09-10T00:37:54.047147513Z" level=info msg="StartContainer for \"c99b8b101081906469fa99b655216d71d371c92c3761bb19929d30012c3fcbbf\" returns successfully" Sep 10 00:37:57.073620 kubelet[2518]: E0910 00:37:57.073575 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:57.084155 kubelet[2518]: I0910 00:37:57.083988 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-95wgb" podStartSLOduration=4.479671173 podStartE2EDuration="8.083967664s" podCreationTimestamp="2025-09-10 00:37:49 +0000 UTC" firstStartedPulling="2025-09-10 00:37:50.34633591 +0000 UTC m=+5.658550599" lastFinishedPulling="2025-09-10 00:37:53.950632401 +0000 UTC m=+9.262847090" observedRunningTime="2025-09-10 00:37:54.826468281 +0000 UTC m=+10.138682970" watchObservedRunningTime="2025-09-10 00:37:57.083967664 +0000 UTC m=+12.396182363" Sep 10 00:37:57.820302 kubelet[2518]: E0910 00:37:57.820262 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:37:59.692267 sudo[1646]: pam_unix(sudo:session): session closed for user root Sep 10 00:37:59.706098 sshd[1642]: pam_unix(sshd:session): session closed for user core Sep 10 00:37:59.710717 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:56380.service: Deactivated successfully. Sep 10 00:37:59.713688 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:37:59.713951 systemd[1]: session-7.scope: Consumed 5.479s CPU time, 161.2M memory peak, 0B memory swap peak. Sep 10 00:37:59.714811 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:37:59.716172 systemd-logind[1448]: Removed session 7. Sep 10 00:38:03.665151 systemd[1]: Created slice kubepods-besteffort-poda3635681_5879_4163_9c87_23257cb51626.slice - libcontainer container kubepods-besteffort-poda3635681_5879_4163_9c87_23257cb51626.slice. 
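
The startup-latency entry for tigera-operator at 00:37:57 decomposes exactly with the timestamps it carries; the apparent relationship (an inference, but one the numbers confirm to the microsecond) is that the E2E figure runs from pod creation to the watch-observed running time, and the SLO figure subtracts the image-pull window:

  podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                      = 00:37:57.083967664 - 00:37:49 = 8.083967664 s
  podStartSLOduration = podStartE2EDuration - (lastFinishedPulling - firstStartedPulling)
                      = 8.083967664 - (00:37:53.950632401 - 00:37:50.34633591)
                      = 8.083967664 - 3.604296491 = 4.479671173 s

The subtracted 3.604 s pull window brackets the 3.602668804 s containerd itself reported for the operator image; for kube-proxy earlier, whose pull timestamps were zero because no pull was needed, the SLO and E2E figures coincided at 2.1900051 s.
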
Sep 10 00:38:03.684008 kubelet[2518]: I0910 00:38:03.683925 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmbxg\" (UniqueName: \"kubernetes.io/projected/a3635681-5879-4163-9c87-23257cb51626-kube-api-access-wmbxg\") pod \"calico-typha-c994ff4c8-zkbtp\" (UID: \"a3635681-5879-4163-9c87-23257cb51626\") " pod="calico-system/calico-typha-c994ff4c8-zkbtp" Sep 10 00:38:03.684008 kubelet[2518]: I0910 00:38:03.683999 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a3635681-5879-4163-9c87-23257cb51626-typha-certs\") pod \"calico-typha-c994ff4c8-zkbtp\" (UID: \"a3635681-5879-4163-9c87-23257cb51626\") " pod="calico-system/calico-typha-c994ff4c8-zkbtp" Sep 10 00:38:03.684539 kubelet[2518]: I0910 00:38:03.684111 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3635681-5879-4163-9c87-23257cb51626-tigera-ca-bundle\") pod \"calico-typha-c994ff4c8-zkbtp\" (UID: \"a3635681-5879-4163-9c87-23257cb51626\") " pod="calico-system/calico-typha-c994ff4c8-zkbtp" Sep 10 00:38:03.891192 systemd[1]: Created slice kubepods-besteffort-pod6b141d35_02c4_40b2_aa61_172b2ff1aa64.slice - libcontainer container kubepods-besteffort-pod6b141d35_02c4_40b2_aa61_172b2ff1aa64.slice. Sep 10 00:38:03.980207 kubelet[2518]: E0910 00:38:03.979726 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:03.983598 containerd[1471]: time="2025-09-10T00:38:03.983488731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c994ff4c8-zkbtp,Uid:a3635681-5879-4163-9c87-23257cb51626,Namespace:calico-system,Attempt:0,}" Sep 10 00:38:03.985727 kubelet[2518]: I0910 00:38:03.985686 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ffs\" (UniqueName: \"kubernetes.io/projected/6b141d35-02c4-40b2-aa61-172b2ff1aa64-kube-api-access-r8ffs\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985800 kubelet[2518]: I0910 00:38:03.985731 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-policysync\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985800 kubelet[2518]: I0910 00:38:03.985759 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-flexvol-driver-host\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985800 kubelet[2518]: I0910 00:38:03.985782 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-var-lib-calico\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985874 kubelet[2518]: I0910 00:38:03.985808 2518 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-cni-log-dir\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985874 kubelet[2518]: I0910 00:38:03.985846 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-lib-modules\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985930 kubelet[2518]: I0910 00:38:03.985875 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b141d35-02c4-40b2-aa61-172b2ff1aa64-tigera-ca-bundle\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.985930 kubelet[2518]: I0910 00:38:03.985893 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-xtables-lock\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.986006 kubelet[2518]: I0910 00:38:03.985929 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6b141d35-02c4-40b2-aa61-172b2ff1aa64-node-certs\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.986006 kubelet[2518]: I0910 00:38:03.985958 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-var-run-calico\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.986006 kubelet[2518]: I0910 00:38:03.985985 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-cni-bin-dir\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:03.986107 kubelet[2518]: I0910 00:38:03.986006 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6b141d35-02c4-40b2-aa61-172b2ff1aa64-cni-net-dir\") pod \"calico-node-n9fw5\" (UID: \"6b141d35-02c4-40b2-aa61-172b2ff1aa64\") " pod="calico-system/calico-node-n9fw5" Sep 10 00:38:04.302213 kubelet[2518]: E0910 00:38:04.301980 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:04.306879 containerd[1471]: time="2025-09-10T00:38:04.306453168Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:04.307110 containerd[1471]: time="2025-09-10T00:38:04.306889865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:04.307110 containerd[1471]: time="2025-09-10T00:38:04.306963536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:04.307180 containerd[1471]: time="2025-09-10T00:38:04.307105649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:04.342501 systemd[1]: Started cri-containerd-5e5f16338a79b8629214579d3681d4cea3ff1aa0c2a074ac8d1bd2218a40058b.scope - libcontainer container 5e5f16338a79b8629214579d3681d4cea3ff1aa0c2a074ac8d1bd2218a40058b. Sep 10 00:38:04.378097 kubelet[2518]: E0910 00:38:04.377995 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.379274 kubelet[2518]: W0910 00:38:04.378044 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.387932 kubelet[2518]: E0910 00:38:04.387873 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.388703 kubelet[2518]: E0910 00:38:04.388647 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.388703 kubelet[2518]: W0910 00:38:04.388696 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.388789 kubelet[2518]: E0910 00:38:04.388735 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.389085 kubelet[2518]: E0910 00:38:04.389061 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.389085 kubelet[2518]: W0910 00:38:04.389077 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.389085 kubelet[2518]: E0910 00:38:04.389087 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:38:04.389502 kubelet[2518]: E0910 00:38:04.389477 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.389502 kubelet[2518]: W0910 00:38:04.389495 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.389567 kubelet[2518]: E0910 00:38:04.389506 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.390322 kubelet[2518]: E0910 00:38:04.390296 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.391042 kubelet[2518]: W0910 00:38:04.391005 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.391042 kubelet[2518]: E0910 00:38:04.391031 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.392195 kubelet[2518]: E0910 00:38:04.392158 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.392195 kubelet[2518]: W0910 00:38:04.392180 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.392195 kubelet[2518]: E0910 00:38:04.392196 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.392575 kubelet[2518]: E0910 00:38:04.392552 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.392575 kubelet[2518]: W0910 00:38:04.392568 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.392575 kubelet[2518]: E0910 00:38:04.392578 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.392845 kubelet[2518]: E0910 00:38:04.392822 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.392845 kubelet[2518]: W0910 00:38:04.392837 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.392845 kubelet[2518]: E0910 00:38:04.392848 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:38:04.393370 kubelet[2518]: E0910 00:38:04.393329 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.393370 kubelet[2518]: W0910 00:38:04.393354 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.393460 kubelet[2518]: E0910 00:38:04.393382 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.393792 kubelet[2518]: E0910 00:38:04.393769 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.393792 kubelet[2518]: W0910 00:38:04.393787 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.393861 kubelet[2518]: E0910 00:38:04.393798 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.394136 kubelet[2518]: E0910 00:38:04.393994 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.394136 kubelet[2518]: W0910 00:38:04.394005 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.394136 kubelet[2518]: E0910 00:38:04.394014 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.394286 kubelet[2518]: E0910 00:38:04.394263 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.394286 kubelet[2518]: W0910 00:38:04.394280 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.394356 kubelet[2518]: E0910 00:38:04.394289 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.395086 kubelet[2518]: E0910 00:38:04.394556 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.395086 kubelet[2518]: W0910 00:38:04.394569 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.395086 kubelet[2518]: E0910 00:38:04.394577 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:38:04.395086 kubelet[2518]: E0910 00:38:04.394766 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.395086 kubelet[2518]: W0910 00:38:04.394776 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.395086 kubelet[2518]: E0910 00:38:04.394788 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.395086 kubelet[2518]: E0910 00:38:04.394977 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.395086 kubelet[2518]: W0910 00:38:04.394987 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.395086 kubelet[2518]: E0910 00:38:04.394995 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.395411 kubelet[2518]: E0910 00:38:04.395257 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.395411 kubelet[2518]: W0910 00:38:04.395266 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.395411 kubelet[2518]: E0910 00:38:04.395275 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.395588 kubelet[2518]: E0910 00:38:04.395564 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.395588 kubelet[2518]: W0910 00:38:04.395581 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.395656 kubelet[2518]: E0910 00:38:04.395590 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.395848 kubelet[2518]: E0910 00:38:04.395823 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.395848 kubelet[2518]: W0910 00:38:04.395840 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.395848 kubelet[2518]: E0910 00:38:04.395849 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:38:04.396525 kubelet[2518]: E0910 00:38:04.396067 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.396525 kubelet[2518]: W0910 00:38:04.396094 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.396525 kubelet[2518]: E0910 00:38:04.396106 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.396525 kubelet[2518]: E0910 00:38:04.396358 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.396525 kubelet[2518]: W0910 00:38:04.396367 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.396525 kubelet[2518]: E0910 00:38:04.396376 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.396784 kubelet[2518]: E0910 00:38:04.396713 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.396784 kubelet[2518]: W0910 00:38:04.396726 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.396784 kubelet[2518]: E0910 00:38:04.396738 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:38:04.396784 kubelet[2518]: I0910 00:38:04.396776 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt9dt\" (UniqueName: \"kubernetes.io/projected/2025065d-06f1-4598-a92c-46630a2af417-kube-api-access-tt9dt\") pod \"csi-node-driver-2p7zs\" (UID: \"2025065d-06f1-4598-a92c-46630a2af417\") " pod="calico-system/csi-node-driver-2p7zs" Sep 10 00:38:04.397256 kubelet[2518]: E0910 00:38:04.397189 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:38:04.397256 kubelet[2518]: W0910 00:38:04.397207 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:38:04.397256 kubelet[2518]: E0910 00:38:04.397224 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:38:04.397256 kubelet[2518]: I0910 00:38:04.397252 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2025065d-06f1-4598-a92c-46630a2af417-kubelet-dir\") pod \"csi-node-driver-2p7zs\" (UID: \"2025065d-06f1-4598-a92c-46630a2af417\") " pod="calico-system/csi-node-driver-2p7zs"
Sep 10 00:38:04.397911 kubelet[2518]: E0910 00:38:04.397645 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 00:38:04.397911 kubelet[2518]: W0910 00:38:04.397662 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:38:04.397911 kubelet[2518]: E0910 00:38:04.397689 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 00:38:04.397911 kubelet[2518]: I0910 00:38:04.397708 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2025065d-06f1-4598-a92c-46630a2af417-registration-dir\") pod \"csi-node-driver-2p7zs\" (UID: \"2025065d-06f1-4598-a92c-46630a2af417\") " pod="calico-system/csi-node-driver-2p7zs"
Sep 10 00:38:04.398030 kubelet[2518]: I0910 00:38:04.398029 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2025065d-06f1-4598-a92c-46630a2af417-socket-dir\") pod \"csi-node-driver-2p7zs\" (UID: \"2025065d-06f1-4598-a92c-46630a2af417\") " pod="calico-system/csi-node-driver-2p7zs"
Sep 10 00:38:04.398813 kubelet[2518]: I0910 00:38:04.398543 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2025065d-06f1-4598-a92c-46630a2af417-varrun\") pod \"csi-node-driver-2p7zs\" (UID: \"2025065d-06f1-4598-a92c-46630a2af417\") " pod="calico-system/csi-node-driver-2p7zs"
[… the FlexVolume probe-failure triplet above (driver-call.go:262, driver-call.go:149, plugins.go:695) recurs roughly forty more times for nodeagent~uds between 00:38:04.398 and 00:38:04.526, interleaved with the containerd entries below; the verbatim repeats are elided …]
Sep 10 00:38:04.422733 containerd[1471]: time="2025-09-10T00:38:04.422641405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-c994ff4c8-zkbtp,Uid:a3635681-5879-4163-9c87-23257cb51626,Namespace:calico-system,Attempt:0,} returns sandbox id \"5e5f16338a79b8629214579d3681d4cea3ff1aa0c2a074ac8d1bd2218a40058b\""
Sep 10 00:38:04.426919 kubelet[2518]: E0910 00:38:04.426895 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:38:04.432110 containerd[1471]: time="2025-09-10T00:38:04.432037026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 10 00:38:04.494735 containerd[1471]: time="2025-09-10T00:38:04.494664917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9fw5,Uid:6b141d35-02c4-40b2-aa61-172b2ff1aa64,Namespace:calico-system,Attempt:0,}"
Sep 10 00:38:04.535994 containerd[1471]: time="2025-09-10T00:38:04.535817991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 10 00:38:04.536173 containerd[1471]: time="2025-09-10T00:38:04.536009153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 10 00:38:04.536173 containerd[1471]: time="2025-09-10T00:38:04.536082213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 10 00:38:04.536378 containerd[1471]: time="2025-09-10T00:38:04.536306795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
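Context on the recurring failure above: kubelet's FlexVolume prober execs each plugin binary it finds under the plugin directory with the argument init and decodes the binary's stdout as a JSON status object. The Calico uds binary has not been installed at this point, so the call produces no output, and decoding an empty byte slice is exactly what yields Go's "unexpected end of JSON input". A minimal, illustrative sketch of that decode path (DriverStatus and callDriver are stand-ins, not kubelet source):

// flexprobe.go, a sketch of the FlexVolume "init" call seen failing above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus mirrors the JSON a FlexVolume driver is expected to print,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	// When the binary is missing, the exec fails and out stays empty.
	out, _ := exec.Command(path, args...).CombinedOutput()
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// json.Unmarshal on empty input returns "unexpected end of JSON
		// input", the error string repeated throughout the log.
		return nil, fmt.Errorf("failed to unmarshal output for command: %s, output: %q, error: %w", args[0], out, err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	fmt.Println(err)
}

The flexvol-driver init container pulled further down (ghcr.io/flatcar/calico/pod2daemon-flexvol) is what installs the uds driver into that directory, which is presumably why the triplet stops recurring after 00:38:08.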
Sep 10 00:38:04.567410 systemd[1]: Started cri-containerd-3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9.scope - libcontainer container 3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9.
Sep 10 00:38:04.612409 containerd[1471]: time="2025-09-10T00:38:04.612349860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n9fw5,Uid:6b141d35-02c4-40b2-aa61-172b2ff1aa64,Namespace:calico-system,Attempt:0,} returns sandbox id \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\""
Sep 10 00:38:05.779879 kubelet[2518]: E0910 00:38:05.779797 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417"
Sep 10 00:38:06.245705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2260724572.mount: Deactivated successfully.
Sep 10 00:38:06.651940 containerd[1471]: time="2025-09-10T00:38:06.651856376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:38:06.652793 containerd[1471]: time="2025-09-10T00:38:06.652710052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 10 00:38:06.654235 containerd[1471]: time="2025-09-10T00:38:06.654076040Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:38:06.656452 containerd[1471]: time="2025-09-10T00:38:06.656150779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:38:06.657390 containerd[1471]: time="2025-09-10T00:38:06.657340378Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.225263641s"
Sep 10 00:38:06.657390 containerd[1471]: time="2025-09-10T00:38:06.657379718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 10 00:38:06.658961 containerd[1471]: time="2025-09-10T00:38:06.658758763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 10 00:38:06.674608 containerd[1471]: time="2025-09-10T00:38:06.673352450Z" level=info msg="CreateContainer within sandbox \"5e5f16338a79b8629214579d3681d4cea3ff1aa0c2a074ac8d1bd2218a40058b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 10 00:38:06.692817 containerd[1471]: time="2025-09-10T00:38:06.692732560Z" level=info msg="CreateContainer within sandbox \"5e5f16338a79b8629214579d3681d4cea3ff1aa0c2a074ac8d1bd2218a40058b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"10bcbb1dc3030b5fc19ca3b4072beaac8cd37e8830a58a74115b20c24bb57c46\""
Sep 10 00:38:06.695871 containerd[1471]: time="2025-09-10T00:38:06.695824197Z" level=info msg="StartContainer for \"10bcbb1dc3030b5fc19ca3b4072beaac8cd37e8830a58a74115b20c24bb57c46\""
Sep 10 00:38:06.728334 systemd[1]: Started cri-containerd-10bcbb1dc3030b5fc19ca3b4072beaac8cd37e8830a58a74115b20c24bb57c46.scope - libcontainer container 10bcbb1dc3030b5fc19ca3b4072beaac8cd37e8830a58a74115b20c24bb57c46.
Sep 10 00:38:06.914786 containerd[1471]: time="2025-09-10T00:38:06.914618276Z" level=info msg="StartContainer for \"10bcbb1dc3030b5fc19ca3b4072beaac8cd37e8830a58a74115b20c24bb57c46\" returns successfully"
Sep 10 00:38:07.782324 kubelet[2518]: E0910 00:38:07.782251 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417"
Sep 10 00:38:07.924487 kubelet[2518]: E0910 00:38:07.924439 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 00:38:07.948009 kubelet[2518]: I0910 00:38:07.947686 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-c994ff4c8-zkbtp" podStartSLOduration=2.718971833 podStartE2EDuration="4.947667495s" podCreationTimestamp="2025-09-10 00:38:03 +0000 UTC" firstStartedPulling="2025-09-10 00:38:04.429746167 +0000 UTC m=+19.741960856" lastFinishedPulling="2025-09-10 00:38:06.658441828 +0000 UTC m=+21.970656518" observedRunningTime="2025-09-10 00:38:07.946345349 +0000 UTC m=+23.258560038" watchObservedRunningTime="2025-09-10 00:38:07.947667495 +0000 UTC m=+23.259882184"
Sep 10 00:38:08.019576 kubelet[2518]: E0910 00:38:08.019534 2518 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 00:38:08.019576 kubelet[2518]: W0910 00:38:08.019559 2518 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 00:38:08.020426 kubelet[2518]: E0910 00:38:08.020394 2518 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[… the same FlexVolume probe-failure triplet recurs roughly thirty more times between 00:38:08.020 and 00:38:08.038; the verbatim repeats are elided …]
Sep 10 00:38:08.157827 containerd[1471]: time="2025-09-10T00:38:08.157750571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:38:08.158746 containerd[1471]: time="2025-09-10T00:38:08.158698761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 10 00:38:08.159938 containerd[1471]: time="2025-09-10T00:38:08.159898438Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:38:08.162346 containerd[1471]: time="2025-09-10T00:38:08.162303482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 00:38:08.163054 containerd[1471]: time="2025-09-10T00:38:08.163018263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.504223026s"
Sep 10 00:38:08.163211 containerd[1471]: time="2025-09-10T00:38:08.163056630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 10 00:38:08.169417 containerd[1471]: time="2025-09-10T00:38:08.169362021Z" level=info msg="CreateContainer within sandbox \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 10 00:38:08.188405 containerd[1471]: time="2025-09-10T00:38:08.188320881Z" level=info msg="CreateContainer within sandbox \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc\""
Sep 10 00:38:08.189090 containerd[1471]: time="2025-09-10T00:38:08.188954749Z" level=info msg="StartContainer for \"ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc\""
Sep 10 00:38:08.228428 systemd[1]: Started cri-containerd-ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc.scope - libcontainer container ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc.
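The recurring dns.go:153 "Nameserver limits exceeded" entries are a fixed cap, not a Calico symptom: resolv.conf honors at most three nameservers, so kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns about the rest. A toy illustration of the truncation (the constant, the helper, and the fourth server are illustrative, not kubelet source):

// dnslimit.go, illustrating the 3-nameserver cap behind the warnings above.
package main

import "fmt"

const maxDNSNameservers = 3 // resolv.conf honors only the first three

func applyLimit(servers []string) (applied, omitted []string) {
	if len(servers) <= maxDNSNameservers {
		return servers, nil
	}
	return servers[:maxDNSNameservers], servers[maxDNSNameservers:]
}

func main() {
	applied, omitted := applyLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"})
	fmt.Println("applied:", applied) // the trio reported in the log
	fmt.Println("omitted:", omitted) // "192.168.1.1" is a hypothetical fourth entry
}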
Sep 10 00:38:08.327094 systemd[1]: cri-containerd-ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc.scope: Deactivated successfully. Sep 10 00:38:08.570966 containerd[1471]: time="2025-09-10T00:38:08.570874406Z" level=info msg="StartContainer for \"ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc\" returns successfully" Sep 10 00:38:08.597915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc-rootfs.mount: Deactivated successfully. Sep 10 00:38:08.925382 kubelet[2518]: I0910 00:38:08.925265 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:38:08.928206 kubelet[2518]: E0910 00:38:08.928186 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:08.928311 kubelet[2518]: E0910 00:38:08.928241 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:08.940902 containerd[1471]: time="2025-09-10T00:38:08.940783630Z" level=info msg="shim disconnected" id=ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc namespace=k8s.io Sep 10 00:38:08.940902 containerd[1471]: time="2025-09-10T00:38:08.940861367Z" level=warning msg="cleaning up after shim disconnected" id=ff467ccefaf1bc095975622729d90108c7711ca4fa4933df4e263f91bbfd73bc namespace=k8s.io Sep 10 00:38:08.940902 containerd[1471]: time="2025-09-10T00:38:08.940870305Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:38:09.929797 containerd[1471]: time="2025-09-10T00:38:09.929749139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:38:10.780898 kubelet[2518]: E0910 00:38:10.780826 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:12.873274 kubelet[2518]: E0910 00:38:12.873173 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:12.906354 containerd[1471]: time="2025-09-10T00:38:12.906289790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:12.907369 containerd[1471]: time="2025-09-10T00:38:12.907326691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 10 00:38:12.908578 containerd[1471]: time="2025-09-10T00:38:12.908534627Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:12.912264 containerd[1471]: time="2025-09-10T00:38:12.912229198Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:12.912818 containerd[1471]: time="2025-09-10T00:38:12.912784588Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 2.982995729s" Sep 10 00:38:12.912818 containerd[1471]: time="2025-09-10T00:38:12.912814729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 10 00:38:12.914651 containerd[1471]: time="2025-09-10T00:38:12.914617816Z" level=info msg="CreateContainer within sandbox \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:38:12.942933 containerd[1471]: time="2025-09-10T00:38:12.942851448Z" level=info msg="CreateContainer within sandbox \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9\"" Sep 10 00:38:12.943980 containerd[1471]: time="2025-09-10T00:38:12.943800301Z" level=info msg="StartContainer for \"64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9\"" Sep 10 00:38:12.981297 systemd[1]: Started cri-containerd-64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9.scope - libcontainer container 64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9. Sep 10 00:38:13.017994 containerd[1471]: time="2025-09-10T00:38:13.017938409Z" level=info msg="StartContainer for \"64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9\" returns successfully" Sep 10 00:38:14.780613 kubelet[2518]: E0910 00:38:14.780539 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:14.964507 systemd[1]: cri-containerd-64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9.scope: Deactivated successfully. Sep 10 00:38:14.990938 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9-rootfs.mount: Deactivated successfully. Sep 10 00:38:15.013978 kubelet[2518]: I0910 00:38:15.013926 2518 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 00:38:15.068482 systemd[1]: Created slice kubepods-burstable-podac5d8d6b_a420_4114_bc22_8d86a77072d3.slice - libcontainer container kubepods-burstable-podac5d8d6b_a420_4114_bc22_8d86a77072d3.slice. 
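The "Created slice kubepods-burstable-pod..." entries above and below follow the systemd cgroup driver's naming convention: the pod's QoS class is prefixed and the dashes in the pod UID become underscores, since systemd reserves "-" for slice hierarchy. A small Go sketch of that mapping (an illustration of the convention visible in this log, not kubelet's implementation):

package main

import (
	"fmt"
	"strings"
)

// podSliceName builds the systemd slice name for a pod: QoS class
// prefix plus the pod UID with "-" escaped to "_".
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// Matches the "kubepods-burstable-podac5d8d6b_..." slice logged above.
	fmt.Println(podSliceName("burstable", "ac5d8d6b-a420-4114-bc22-8d86a77072d3"))
}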
Sep 10 00:38:15.075844 kubelet[2518]: I0910 00:38:15.074872 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlkqt\" (UniqueName: \"kubernetes.io/projected/90f17e30-9dec-4e54-8e40-8da4f9ce8c2b-kube-api-access-jlkqt\") pod \"calico-apiserver-6c7c97f695-rjghd\" (UID: \"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b\") " pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" Sep 10 00:38:15.075844 kubelet[2518]: I0910 00:38:15.074954 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-backend-key-pair\") pod \"whisker-678754df47-sfdv9\" (UID: \"e61da1d8-2e2d-4754-b234-25357c6e33b4\") " pod="calico-system/whisker-678754df47-sfdv9" Sep 10 00:38:15.075844 kubelet[2518]: I0910 00:38:15.074978 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30f4ddb7-eda4-4f91-9889-caa4c6fe0752-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-bwmkx\" (UID: \"30f4ddb7-eda4-4f91-9889-caa4c6fe0752\") " pod="calico-system/goldmane-54d579b49d-bwmkx" Sep 10 00:38:15.075844 kubelet[2518]: I0910 00:38:15.075046 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjmdl\" (UniqueName: \"kubernetes.io/projected/9c78a26e-1b4d-439c-a913-c9a5704cad9a-kube-api-access-sjmdl\") pod \"calico-apiserver-6c7c97f695-66hjz\" (UID: \"9c78a26e-1b4d-439c-a913-c9a5704cad9a\") " pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" Sep 10 00:38:15.075844 kubelet[2518]: I0910 00:38:15.075087 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c002682-d6bc-423d-b99d-e4ec01f48f3d-config-volume\") pod \"coredns-668d6bf9bc-kjzlx\" (UID: \"9c002682-d6bc-423d-b99d-e4ec01f48f3d\") " pod="kube-system/coredns-668d6bf9bc-kjzlx" Sep 10 00:38:15.076107 kubelet[2518]: I0910 00:38:15.075109 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30f4ddb7-eda4-4f91-9889-caa4c6fe0752-config\") pod \"goldmane-54d579b49d-bwmkx\" (UID: \"30f4ddb7-eda4-4f91-9889-caa4c6fe0752\") " pod="calico-system/goldmane-54d579b49d-bwmkx" Sep 10 00:38:15.076107 kubelet[2518]: I0910 00:38:15.075201 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ac5d8d6b-a420-4114-bc22-8d86a77072d3-config-volume\") pod \"coredns-668d6bf9bc-lvpsb\" (UID: \"ac5d8d6b-a420-4114-bc22-8d86a77072d3\") " pod="kube-system/coredns-668d6bf9bc-lvpsb" Sep 10 00:38:15.076107 kubelet[2518]: I0910 00:38:15.075226 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwprf\" (UniqueName: \"kubernetes.io/projected/30f4ddb7-eda4-4f91-9889-caa4c6fe0752-kube-api-access-jwprf\") pod \"goldmane-54d579b49d-bwmkx\" (UID: \"30f4ddb7-eda4-4f91-9889-caa4c6fe0752\") " pod="calico-system/goldmane-54d579b49d-bwmkx" Sep 10 00:38:15.076107 kubelet[2518]: I0910 00:38:15.075245 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5f96b\" (UniqueName: 
\"kubernetes.io/projected/ac5d8d6b-a420-4114-bc22-8d86a77072d3-kube-api-access-5f96b\") pod \"coredns-668d6bf9bc-lvpsb\" (UID: \"ac5d8d6b-a420-4114-bc22-8d86a77072d3\") " pod="kube-system/coredns-668d6bf9bc-lvpsb" Sep 10 00:38:15.076107 kubelet[2518]: I0910 00:38:15.075269 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs4gc\" (UniqueName: \"kubernetes.io/projected/e61da1d8-2e2d-4754-b234-25357c6e33b4-kube-api-access-gs4gc\") pod \"whisker-678754df47-sfdv9\" (UID: \"e61da1d8-2e2d-4754-b234-25357c6e33b4\") " pod="calico-system/whisker-678754df47-sfdv9" Sep 10 00:38:15.076286 kubelet[2518]: I0910 00:38:15.075296 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3bc944ca-86e6-4492-bde5-0808ef5e617e-tigera-ca-bundle\") pod \"calico-kube-controllers-c6cb89779-d447p\" (UID: \"3bc944ca-86e6-4492-bde5-0808ef5e617e\") " pod="calico-system/calico-kube-controllers-c6cb89779-d447p" Sep 10 00:38:15.076286 kubelet[2518]: I0910 00:38:15.075316 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-ca-bundle\") pod \"whisker-678754df47-sfdv9\" (UID: \"e61da1d8-2e2d-4754-b234-25357c6e33b4\") " pod="calico-system/whisker-678754df47-sfdv9" Sep 10 00:38:15.076286 kubelet[2518]: I0910 00:38:15.075342 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/30f4ddb7-eda4-4f91-9889-caa4c6fe0752-goldmane-key-pair\") pod \"goldmane-54d579b49d-bwmkx\" (UID: \"30f4ddb7-eda4-4f91-9889-caa4c6fe0752\") " pod="calico-system/goldmane-54d579b49d-bwmkx" Sep 10 00:38:15.076286 kubelet[2518]: I0910 00:38:15.075367 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/90f17e30-9dec-4e54-8e40-8da4f9ce8c2b-calico-apiserver-certs\") pod \"calico-apiserver-6c7c97f695-rjghd\" (UID: \"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b\") " pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" Sep 10 00:38:15.076286 kubelet[2518]: I0910 00:38:15.075389 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5bmj\" (UniqueName: \"kubernetes.io/projected/3bc944ca-86e6-4492-bde5-0808ef5e617e-kube-api-access-b5bmj\") pod \"calico-kube-controllers-c6cb89779-d447p\" (UID: \"3bc944ca-86e6-4492-bde5-0808ef5e617e\") " pod="calico-system/calico-kube-controllers-c6cb89779-d447p" Sep 10 00:38:15.076406 kubelet[2518]: I0910 00:38:15.075410 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c78a26e-1b4d-439c-a913-c9a5704cad9a-calico-apiserver-certs\") pod \"calico-apiserver-6c7c97f695-66hjz\" (UID: \"9c78a26e-1b4d-439c-a913-c9a5704cad9a\") " pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" Sep 10 00:38:15.076406 kubelet[2518]: I0910 00:38:15.075433 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xt72\" (UniqueName: \"kubernetes.io/projected/9c002682-d6bc-423d-b99d-e4ec01f48f3d-kube-api-access-5xt72\") pod \"coredns-668d6bf9bc-kjzlx\" (UID: 
\"9c002682-d6bc-423d-b99d-e4ec01f48f3d\") " pod="kube-system/coredns-668d6bf9bc-kjzlx" Sep 10 00:38:15.076776 systemd[1]: Created slice kubepods-besteffort-pod90f17e30_9dec_4e54_8e40_8da4f9ce8c2b.slice - libcontainer container kubepods-besteffort-pod90f17e30_9dec_4e54_8e40_8da4f9ce8c2b.slice. Sep 10 00:38:15.082436 systemd[1]: Created slice kubepods-besteffort-pode61da1d8_2e2d_4754_b234_25357c6e33b4.slice - libcontainer container kubepods-besteffort-pode61da1d8_2e2d_4754_b234_25357c6e33b4.slice. Sep 10 00:38:15.299885 systemd[1]: Created slice kubepods-besteffort-pod3bc944ca_86e6_4492_bde5_0808ef5e617e.slice - libcontainer container kubepods-besteffort-pod3bc944ca_86e6_4492_bde5_0808ef5e617e.slice. Sep 10 00:38:15.303903 systemd[1]: Created slice kubepods-besteffort-pod9c78a26e_1b4d_439c_a913_c9a5704cad9a.slice - libcontainer container kubepods-besteffort-pod9c78a26e_1b4d_439c_a913_c9a5704cad9a.slice. Sep 10 00:38:15.309822 systemd[1]: Created slice kubepods-burstable-pod9c002682_d6bc_423d_b99d_e4ec01f48f3d.slice - libcontainer container kubepods-burstable-pod9c002682_d6bc_423d_b99d_e4ec01f48f3d.slice. Sep 10 00:38:15.312533 kubelet[2518]: E0910 00:38:15.312501 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:15.315838 systemd[1]: Created slice kubepods-besteffort-pod30f4ddb7_eda4_4f91_9889_caa4c6fe0752.slice - libcontainer container kubepods-besteffort-pod30f4ddb7_eda4_4f91_9889_caa4c6fe0752.slice. Sep 10 00:38:15.375772 containerd[1471]: time="2025-09-10T00:38:15.375701765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzlx,Uid:9c002682-d6bc-423d-b99d-e4ec01f48f3d,Namespace:kube-system,Attempt:0,}" Sep 10 00:38:15.375772 containerd[1471]: time="2025-09-10T00:38:15.375743508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bwmkx,Uid:30f4ddb7-eda4-4f91-9889-caa4c6fe0752,Namespace:calico-system,Attempt:0,}" Sep 10 00:38:15.376399 containerd[1471]: time="2025-09-10T00:38:15.375701775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-66hjz,Uid:9c78a26e-1b4d-439c-a913-c9a5704cad9a,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:38:15.376399 containerd[1471]: time="2025-09-10T00:38:15.376281427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6cb89779-d447p,Uid:3bc944ca-86e6-4492-bde5-0808ef5e617e,Namespace:calico-system,Attempt:0,}" Sep 10 00:38:15.376452 kubelet[2518]: E0910 00:38:15.376156 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:15.376803 containerd[1471]: time="2025-09-10T00:38:15.376772341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvpsb,Uid:ac5d8d6b-a420-4114-bc22-8d86a77072d3,Namespace:kube-system,Attempt:0,}" Sep 10 00:38:15.380509 containerd[1471]: time="2025-09-10T00:38:15.380479737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-rjghd,Uid:90f17e30-9dec-4e54-8e40-8da4f9ce8c2b,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:38:15.385350 containerd[1471]: time="2025-09-10T00:38:15.385298250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678754df47-sfdv9,Uid:e61da1d8-2e2d-4754-b234-25357c6e33b4,Namespace:calico-system,Attempt:0,}" Sep 10 00:38:15.428134 
containerd[1471]: time="2025-09-10T00:38:15.428034096Z" level=info msg="shim disconnected" id=64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9 namespace=k8s.io Sep 10 00:38:15.428134 containerd[1471]: time="2025-09-10T00:38:15.428100099Z" level=warning msg="cleaning up after shim disconnected" id=64f7ab8c5b8a7a44b42019606d772b0cc8f8f71cb8f3e9c985070b68f66735e9 namespace=k8s.io Sep 10 00:38:15.428134 containerd[1471]: time="2025-09-10T00:38:15.428112304Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:38:15.699282 containerd[1471]: time="2025-09-10T00:38:15.699053120Z" level=error msg="Failed to destroy network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.700232 containerd[1471]: time="2025-09-10T00:38:15.699598844Z" level=error msg="encountered an error cleaning up failed sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.700232 containerd[1471]: time="2025-09-10T00:38:15.699667693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-678754df47-sfdv9,Uid:e61da1d8-2e2d-4754-b234-25357c6e33b4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.719387 kubelet[2518]: E0910 00:38:15.719313 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.719684 kubelet[2518]: E0910 00:38:15.719425 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-678754df47-sfdv9" Sep 10 00:38:15.719684 kubelet[2518]: E0910 00:38:15.719461 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-678754df47-sfdv9" Sep 10 00:38:15.719684 kubelet[2518]: E0910 00:38:15.719527 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-678754df47-sfdv9_calico-system(e61da1d8-2e2d-4754-b234-25357c6e33b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-678754df47-sfdv9_calico-system(e61da1d8-2e2d-4754-b234-25357c6e33b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-678754df47-sfdv9" podUID="e61da1d8-2e2d-4754-b234-25357c6e33b4" Sep 10 00:38:15.727416 containerd[1471]: time="2025-09-10T00:38:15.727333957Z" level=error msg="Failed to destroy network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.727931 containerd[1471]: time="2025-09-10T00:38:15.727893018Z" level=error msg="encountered an error cleaning up failed sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.727990 containerd[1471]: time="2025-09-10T00:38:15.727966937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzlx,Uid:9c002682-d6bc-423d-b99d-e4ec01f48f3d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.728328 kubelet[2518]: E0910 00:38:15.728243 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.728457 kubelet[2518]: E0910 00:38:15.728325 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kjzlx" Sep 10 00:38:15.728457 kubelet[2518]: E0910 00:38:15.728434 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kjzlx" Sep 10 00:38:15.728600 kubelet[2518]: E0910 00:38:15.728488 2518 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kjzlx_kube-system(9c002682-d6bc-423d-b99d-e4ec01f48f3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kjzlx_kube-system(9c002682-d6bc-423d-b99d-e4ec01f48f3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kjzlx" podUID="9c002682-d6bc-423d-b99d-e4ec01f48f3d" Sep 10 00:38:15.731992 containerd[1471]: time="2025-09-10T00:38:15.731944543Z" level=error msg="Failed to destroy network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.732682 containerd[1471]: time="2025-09-10T00:38:15.732648465Z" level=error msg="encountered an error cleaning up failed sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.732832 containerd[1471]: time="2025-09-10T00:38:15.732801192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-rjghd,Uid:90f17e30-9dec-4e54-8e40-8da4f9ce8c2b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.736059 kubelet[2518]: E0910 00:38:15.733293 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.736059 kubelet[2518]: E0910 00:38:15.733552 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" Sep 10 00:38:15.736059 kubelet[2518]: E0910 00:38:15.733581 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" Sep 10 
00:38:15.736234 kubelet[2518]: E0910 00:38:15.733655 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7c97f695-rjghd_calico-apiserver(90f17e30-9dec-4e54-8e40-8da4f9ce8c2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7c97f695-rjghd_calico-apiserver(90f17e30-9dec-4e54-8e40-8da4f9ce8c2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" podUID="90f17e30-9dec-4e54-8e40-8da4f9ce8c2b" Sep 10 00:38:15.748450 containerd[1471]: time="2025-09-10T00:38:15.748373111Z" level=error msg="Failed to destroy network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.749511 containerd[1471]: time="2025-09-10T00:38:15.749475271Z" level=error msg="encountered an error cleaning up failed sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.749786 containerd[1471]: time="2025-09-10T00:38:15.749734772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-66hjz,Uid:9c78a26e-1b4d-439c-a913-c9a5704cad9a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.750384 kubelet[2518]: E0910 00:38:15.750317 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.750474 kubelet[2518]: E0910 00:38:15.750418 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" Sep 10 00:38:15.750474 kubelet[2518]: E0910 00:38:15.750446 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" Sep 10 00:38:15.750549 kubelet[2518]: E0910 00:38:15.750500 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c7c97f695-66hjz_calico-apiserver(9c78a26e-1b4d-439c-a913-c9a5704cad9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c7c97f695-66hjz_calico-apiserver(9c78a26e-1b4d-439c-a913-c9a5704cad9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" podUID="9c78a26e-1b4d-439c-a913-c9a5704cad9a" Sep 10 00:38:15.757268 containerd[1471]: time="2025-09-10T00:38:15.756962978Z" level=error msg="Failed to destroy network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.757601 containerd[1471]: time="2025-09-10T00:38:15.757560175Z" level=error msg="encountered an error cleaning up failed sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.757674 containerd[1471]: time="2025-09-10T00:38:15.757631027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvpsb,Uid:ac5d8d6b-a420-4114-bc22-8d86a77072d3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.759195 kubelet[2518]: E0910 00:38:15.757917 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.759195 kubelet[2518]: E0910 00:38:15.758001 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lvpsb" Sep 10 00:38:15.759195 kubelet[2518]: E0910 00:38:15.758043 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lvpsb" Sep 10 00:38:15.759377 kubelet[2518]: E0910 00:38:15.758107 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lvpsb_kube-system(ac5d8d6b-a420-4114-bc22-8d86a77072d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lvpsb_kube-system(ac5d8d6b-a420-4114-bc22-8d86a77072d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lvpsb" podUID="ac5d8d6b-a420-4114-bc22-8d86a77072d3" Sep 10 00:38:15.763086 containerd[1471]: time="2025-09-10T00:38:15.763026417Z" level=error msg="Failed to destroy network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.763556 containerd[1471]: time="2025-09-10T00:38:15.763522222Z" level=error msg="encountered an error cleaning up failed sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.763613 containerd[1471]: time="2025-09-10T00:38:15.763586651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6cb89779-d447p,Uid:3bc944ca-86e6-4492-bde5-0808ef5e617e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.764061 kubelet[2518]: E0910 00:38:15.763826 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.764061 kubelet[2518]: E0910 00:38:15.763912 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6cb89779-d447p" Sep 10 00:38:15.764061 kubelet[2518]: E0910 00:38:15.763938 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c6cb89779-d447p" Sep 10 00:38:15.764230 kubelet[2518]: E0910 00:38:15.763997 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c6cb89779-d447p_calico-system(3bc944ca-86e6-4492-bde5-0808ef5e617e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c6cb89779-d447p_calico-system(3bc944ca-86e6-4492-bde5-0808ef5e617e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6cb89779-d447p" podUID="3bc944ca-86e6-4492-bde5-0808ef5e617e" Sep 10 00:38:15.766209 containerd[1471]: time="2025-09-10T00:38:15.766144451Z" level=error msg="Failed to destroy network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.766628 containerd[1471]: time="2025-09-10T00:38:15.766584824Z" level=error msg="encountered an error cleaning up failed sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.766670 containerd[1471]: time="2025-09-10T00:38:15.766642991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bwmkx,Uid:30f4ddb7-eda4-4f91-9889-caa4c6fe0752,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.766962 kubelet[2518]: E0910 00:38:15.766913 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:15.767030 kubelet[2518]: E0910 00:38:15.766980 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bwmkx" Sep 10 00:38:15.767030 kubelet[2518]: E0910 00:38:15.767004 2518 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bwmkx" Sep 10 00:38:15.767137 kubelet[2518]: E0910 00:38:15.767059 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-bwmkx_calico-system(30f4ddb7-eda4-4f91-9889-caa4c6fe0752)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-bwmkx_calico-system(30f4ddb7-eda4-4f91-9889-caa4c6fe0752)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-bwmkx" podUID="30f4ddb7-eda4-4f91-9889-caa4c6fe0752" Sep 10 00:38:16.104789 kubelet[2518]: I0910 00:38:16.104744 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:16.106559 kubelet[2518]: I0910 00:38:16.106541 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:16.112016 containerd[1471]: time="2025-09-10T00:38:16.111979453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:38:16.112417 kubelet[2518]: I0910 00:38:16.112273 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:16.116414 kubelet[2518]: I0910 00:38:16.116390 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:16.134635 containerd[1471]: time="2025-09-10T00:38:16.134393405Z" level=info msg="StopPodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\"" Sep 10 00:38:16.134975 containerd[1471]: time="2025-09-10T00:38:16.134956001Z" level=info msg="StopPodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\"" Sep 10 00:38:16.137196 containerd[1471]: time="2025-09-10T00:38:16.137016146Z" level=info msg="StopPodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\"" Sep 10 00:38:16.139694 containerd[1471]: time="2025-09-10T00:38:16.138529214Z" level=info msg="Ensure that sandbox 2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858 in task-service has been cleanup successfully" Sep 10 00:38:16.139694 containerd[1471]: time="2025-09-10T00:38:16.138583222Z" level=info msg="StopPodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\"" Sep 10 00:38:16.139694 containerd[1471]: time="2025-09-10T00:38:16.138739746Z" level=info msg="Ensure that sandbox f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48 in task-service has been cleanup successfully" Sep 10 00:38:16.139694 containerd[1471]: time="2025-09-10T00:38:16.138542331Z" level=info msg="Ensure that sandbox 
03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf in task-service has been cleanup successfully" Sep 10 00:38:16.139862 kubelet[2518]: I0910 00:38:16.138978 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:16.142540 containerd[1471]: time="2025-09-10T00:38:16.142503651Z" level=info msg="StopPodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\"" Sep 10 00:38:16.142685 containerd[1471]: time="2025-09-10T00:38:16.142670606Z" level=info msg="Ensure that sandbox 0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4 in task-service has been cleanup successfully" Sep 10 00:38:16.142809 containerd[1471]: time="2025-09-10T00:38:16.138541830Z" level=info msg="Ensure that sandbox d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f in task-service has been cleanup successfully" Sep 10 00:38:16.144092 kubelet[2518]: I0910 00:38:16.144022 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:16.145533 containerd[1471]: time="2025-09-10T00:38:16.145485020Z" level=info msg="StopPodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\"" Sep 10 00:38:16.146154 containerd[1471]: time="2025-09-10T00:38:16.146035231Z" level=info msg="Ensure that sandbox fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291 in task-service has been cleanup successfully" Sep 10 00:38:16.147956 kubelet[2518]: I0910 00:38:16.147918 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:16.150411 containerd[1471]: time="2025-09-10T00:38:16.150344248Z" level=info msg="StopPodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\"" Sep 10 00:38:16.152391 containerd[1471]: time="2025-09-10T00:38:16.151934330Z" level=info msg="Ensure that sandbox c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7 in task-service has been cleanup successfully" Sep 10 00:38:16.208249 containerd[1471]: time="2025-09-10T00:38:16.208180341Z" level=error msg="StopPodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" failed" error="failed to destroy network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.208915 kubelet[2518]: E0910 00:38:16.208691 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:16.208915 kubelet[2518]: E0910 00:38:16.208780 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858"} Sep 10 00:38:16.208915 kubelet[2518]: E0910 00:38:16.208859 2518 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e61da1d8-2e2d-4754-b234-25357c6e33b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.208915 kubelet[2518]: E0910 00:38:16.208896 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e61da1d8-2e2d-4754-b234-25357c6e33b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-678754df47-sfdv9" podUID="e61da1d8-2e2d-4754-b234-25357c6e33b4" Sep 10 00:38:16.214140 containerd[1471]: time="2025-09-10T00:38:16.213044139Z" level=error msg="StopPodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" failed" error="failed to destroy network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.216260 kubelet[2518]: E0910 00:38:16.215323 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:16.216260 kubelet[2518]: E0910 00:38:16.215407 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf"} Sep 10 00:38:16.216260 kubelet[2518]: E0910 00:38:16.215459 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.216260 kubelet[2518]: E0910 00:38:16.215492 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" 
podUID="90f17e30-9dec-4e54-8e40-8da4f9ce8c2b" Sep 10 00:38:16.219043 containerd[1471]: time="2025-09-10T00:38:16.218981966Z" level=error msg="StopPodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" failed" error="failed to destroy network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.219329 kubelet[2518]: E0910 00:38:16.219255 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:16.219329 kubelet[2518]: E0910 00:38:16.219321 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48"} Sep 10 00:38:16.219427 kubelet[2518]: E0910 00:38:16.219369 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ac5d8d6b-a420-4114-bc22-8d86a77072d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.219427 kubelet[2518]: E0910 00:38:16.219398 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ac5d8d6b-a420-4114-bc22-8d86a77072d3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lvpsb" podUID="ac5d8d6b-a420-4114-bc22-8d86a77072d3" Sep 10 00:38:16.226149 containerd[1471]: time="2025-09-10T00:38:16.223713698Z" level=error msg="StopPodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" failed" error="failed to destroy network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.226271 kubelet[2518]: E0910 00:38:16.224006 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:16.226271 kubelet[2518]: E0910 
00:38:16.224077 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f"} Sep 10 00:38:16.226271 kubelet[2518]: E0910 00:38:16.224133 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c002682-d6bc-423d-b99d-e4ec01f48f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.226271 kubelet[2518]: E0910 00:38:16.224163 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c002682-d6bc-423d-b99d-e4ec01f48f3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kjzlx" podUID="9c002682-d6bc-423d-b99d-e4ec01f48f3d" Sep 10 00:38:16.228383 containerd[1471]: time="2025-09-10T00:38:16.228334709Z" level=error msg="StopPodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" failed" error="failed to destroy network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.228598 kubelet[2518]: E0910 00:38:16.228562 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:16.228644 kubelet[2518]: E0910 00:38:16.228616 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291"} Sep 10 00:38:16.228700 kubelet[2518]: E0910 00:38:16.228647 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c78a26e-1b4d-439c-a913-c9a5704cad9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.228700 kubelet[2518]: E0910 00:38:16.228668 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c78a26e-1b4d-439c-a913-c9a5704cad9a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" podUID="9c78a26e-1b4d-439c-a913-c9a5704cad9a" Sep 10 00:38:16.229957 containerd[1471]: time="2025-09-10T00:38:16.229918209Z" level=error msg="StopPodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" failed" error="failed to destroy network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.230111 kubelet[2518]: E0910 00:38:16.230078 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:16.230174 kubelet[2518]: E0910 00:38:16.230131 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4"} Sep 10 00:38:16.230212 kubelet[2518]: E0910 00:38:16.230169 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bc944ca-86e6-4492-bde5-0808ef5e617e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.230265 kubelet[2518]: E0910 00:38:16.230212 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bc944ca-86e6-4492-bde5-0808ef5e617e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c6cb89779-d447p" podUID="3bc944ca-86e6-4492-bde5-0808ef5e617e" Sep 10 00:38:16.231996 containerd[1471]: time="2025-09-10T00:38:16.231963525Z" level=error msg="StopPodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" failed" error="failed to destroy network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.232134 kubelet[2518]: E0910 00:38:16.232091 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:16.232200 kubelet[2518]: E0910 00:38:16.232143 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7"} Sep 10 00:38:16.232200 kubelet[2518]: E0910 00:38:16.232172 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"30f4ddb7-eda4-4f91-9889-caa4c6fe0752\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:16.232292 kubelet[2518]: E0910 00:38:16.232189 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30f4ddb7-eda4-4f91-9889-caa4c6fe0752\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-bwmkx" podUID="30f4ddb7-eda4-4f91-9889-caa4c6fe0752" Sep 10 00:38:16.787940 systemd[1]: Created slice kubepods-besteffort-pod2025065d_06f1_4598_a92c_46630a2af417.slice - libcontainer container kubepods-besteffort-pod2025065d_06f1_4598_a92c_46630a2af417.slice. 
Sep 10 00:38:16.790992 containerd[1471]: time="2025-09-10T00:38:16.790953547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p7zs,Uid:2025065d-06f1-4598-a92c-46630a2af417,Namespace:calico-system,Attempt:0,}" Sep 10 00:38:16.858228 containerd[1471]: time="2025-09-10T00:38:16.858159318Z" level=error msg="Failed to destroy network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.858640 containerd[1471]: time="2025-09-10T00:38:16.858608317Z" level=error msg="encountered an error cleaning up failed sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.858682 containerd[1471]: time="2025-09-10T00:38:16.858658828Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p7zs,Uid:2025065d-06f1-4598-a92c-46630a2af417,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.858954 kubelet[2518]: E0910 00:38:16.858889 2518 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:16.858954 kubelet[2518]: E0910 00:38:16.858966 2518 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2p7zs" Sep 10 00:38:16.859577 kubelet[2518]: E0910 00:38:16.858989 2518 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2p7zs" Sep 10 00:38:16.859577 kubelet[2518]: E0910 00:38:16.859045 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2p7zs_calico-system(2025065d-06f1-4598-a92c-46630a2af417)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2p7zs_calico-system(2025065d-06f1-4598-a92c-46630a2af417)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:16.861687 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4-shm.mount: Deactivated successfully. Sep 10 00:38:17.150099 kubelet[2518]: I0910 00:38:17.150058 2518 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:17.150937 containerd[1471]: time="2025-09-10T00:38:17.150896278Z" level=info msg="StopPodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\"" Sep 10 00:38:17.151132 containerd[1471]: time="2025-09-10T00:38:17.151091588Z" level=info msg="Ensure that sandbox 77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4 in task-service has been cleanup successfully" Sep 10 00:38:17.177007 containerd[1471]: time="2025-09-10T00:38:17.176936760Z" level=error msg="StopPodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" failed" error="failed to destroy network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:38:17.177298 kubelet[2518]: E0910 00:38:17.177246 2518 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:17.177343 kubelet[2518]: E0910 00:38:17.177315 2518 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4"} Sep 10 00:38:17.177388 kubelet[2518]: E0910 00:38:17.177366 2518 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2025065d-06f1-4598-a92c-46630a2af417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:38:17.177470 kubelet[2518]: E0910 00:38:17.177404 2518 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2025065d-06f1-4598-a92c-46630a2af417\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-2p7zs" podUID="2025065d-06f1-4598-a92c-46630a2af417" Sep 10 00:38:20.777977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766585344.mount: Deactivated successfully. Sep 10 00:38:21.923638 containerd[1471]: time="2025-09-10T00:38:21.923552181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:21.925157 containerd[1471]: time="2025-09-10T00:38:21.925101307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 10 00:38:21.926566 containerd[1471]: time="2025-09-10T00:38:21.926520926Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:21.929205 containerd[1471]: time="2025-09-10T00:38:21.929169535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:21.929931 containerd[1471]: time="2025-09-10T00:38:21.929887982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 5.81769941s" Sep 10 00:38:21.929980 containerd[1471]: time="2025-09-10T00:38:21.929932501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 00:38:21.941707 containerd[1471]: time="2025-09-10T00:38:21.941593447Z" level=info msg="CreateContainer within sandbox \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:38:21.983913 containerd[1471]: time="2025-09-10T00:38:21.983848436Z" level=info msg="CreateContainer within sandbox \"3850f2d1d47fcb6488735af4452c6d1c27b5ec556f39afd22a1a0b11d95640a9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"84bd4e79f0110dac8bb3e510b2210f598c6635c03910fc9d934e0784a2d5bc01\"" Sep 10 00:38:21.984486 containerd[1471]: time="2025-09-10T00:38:21.984457016Z" level=info msg="StartContainer for \"84bd4e79f0110dac8bb3e510b2210f598c6635c03910fc9d934e0784a2d5bc01\"" Sep 10 00:38:22.037329 systemd[1]: Started cri-containerd-84bd4e79f0110dac8bb3e510b2210f598c6635c03910fc9d934e0784a2d5bc01.scope - libcontainer container 84bd4e79f0110dac8bb3e510b2210f598c6635c03910fc9d934e0784a2d5bc01. Sep 10 00:38:22.080724 containerd[1471]: time="2025-09-10T00:38:22.080672193Z" level=info msg="StartContainer for \"84bd4e79f0110dac8bb3e510b2210f598c6635c03910fc9d934e0784a2d5bc01\" returns successfully" Sep 10 00:38:22.211365 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:38:22.211559 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 00:38:23.438379 kubelet[2518]: I0910 00:38:23.438250 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n9fw5" podStartSLOduration=3.122888694 podStartE2EDuration="20.43822473s" podCreationTimestamp="2025-09-10 00:38:03 +0000 UTC" firstStartedPulling="2025-09-10 00:38:04.61526395 +0000 UTC m=+19.927478639" lastFinishedPulling="2025-09-10 00:38:21.930599986 +0000 UTC m=+37.242814675" observedRunningTime="2025-09-10 00:38:22.320794563 +0000 UTC m=+37.633009253" watchObservedRunningTime="2025-09-10 00:38:23.43822473 +0000 UTC m=+38.750439419" Sep 10 00:38:23.439580 containerd[1471]: time="2025-09-10T00:38:23.439326641Z" level=info msg="StopPodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\"" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:23.572 [INFO][3883] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:23.573 [INFO][3883] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" iface="eth0" netns="/var/run/netns/cni-8bd605fc-da81-cdf0-b600-ada94ec37b54" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:23.573 [INFO][3883] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" iface="eth0" netns="/var/run/netns/cni-8bd605fc-da81-cdf0-b600-ada94ec37b54" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:23.574 [INFO][3883] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" iface="eth0" netns="/var/run/netns/cni-8bd605fc-da81-cdf0-b600-ada94ec37b54" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:23.574 [INFO][3883] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:23.574 [INFO][3883] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.421 [INFO][3891] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.422 [INFO][3891] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.422 [INFO][3891] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.428 [WARNING][3891] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.428 [INFO][3891] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.430 [INFO][3891] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:24.436275 containerd[1471]: 2025-09-10 00:38:24.433 [INFO][3883] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:24.436834 containerd[1471]: time="2025-09-10T00:38:24.436475104Z" level=info msg="TearDown network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" successfully" Sep 10 00:38:24.436834 containerd[1471]: time="2025-09-10T00:38:24.436509312Z" level=info msg="StopPodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" returns successfully" Sep 10 00:38:24.439901 systemd[1]: run-netns-cni\x2d8bd605fc\x2dda81\x2dcdf0\x2db600\x2dada94ec37b54.mount: Deactivated successfully. Sep 10 00:38:24.543944 kubelet[2518]: I0910 00:38:24.543836 2518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-ca-bundle\") pod \"e61da1d8-2e2d-4754-b234-25357c6e33b4\" (UID: \"e61da1d8-2e2d-4754-b234-25357c6e33b4\") " Sep 10 00:38:24.543944 kubelet[2518]: I0910 00:38:24.543923 2518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs4gc\" (UniqueName: \"kubernetes.io/projected/e61da1d8-2e2d-4754-b234-25357c6e33b4-kube-api-access-gs4gc\") pod \"e61da1d8-2e2d-4754-b234-25357c6e33b4\" (UID: \"e61da1d8-2e2d-4754-b234-25357c6e33b4\") " Sep 10 00:38:24.543944 kubelet[2518]: I0910 00:38:24.543952 2518 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-backend-key-pair\") pod \"e61da1d8-2e2d-4754-b234-25357c6e33b4\" (UID: \"e61da1d8-2e2d-4754-b234-25357c6e33b4\") " Sep 10 00:38:24.544601 kubelet[2518]: I0910 00:38:24.544517 2518 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e61da1d8-2e2d-4754-b234-25357c6e33b4" (UID: "e61da1d8-2e2d-4754-b234-25357c6e33b4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:38:24.548801 kubelet[2518]: I0910 00:38:24.548730 2518 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e61da1d8-2e2d-4754-b234-25357c6e33b4" (UID: "e61da1d8-2e2d-4754-b234-25357c6e33b4"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:38:24.548930 kubelet[2518]: I0910 00:38:24.548841 2518 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61da1d8-2e2d-4754-b234-25357c6e33b4-kube-api-access-gs4gc" (OuterVolumeSpecName: "kube-api-access-gs4gc") pod "e61da1d8-2e2d-4754-b234-25357c6e33b4" (UID: "e61da1d8-2e2d-4754-b234-25357c6e33b4"). InnerVolumeSpecName "kube-api-access-gs4gc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:38:24.551027 systemd[1]: var-lib-kubelet-pods-e61da1d8\x2d2e2d\x2d4754\x2db234\x2d25357c6e33b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgs4gc.mount: Deactivated successfully. Sep 10 00:38:24.551195 systemd[1]: var-lib-kubelet-pods-e61da1d8\x2d2e2d\x2d4754\x2db234\x2d25357c6e33b4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 00:38:24.645367 kubelet[2518]: I0910 00:38:24.645284 2518 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:38:24.645367 kubelet[2518]: I0910 00:38:24.645342 2518 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e61da1d8-2e2d-4754-b234-25357c6e33b4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:38:24.645367 kubelet[2518]: I0910 00:38:24.645351 2518 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gs4gc\" (UniqueName: \"kubernetes.io/projected/e61da1d8-2e2d-4754-b234-25357c6e33b4-kube-api-access-gs4gc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:38:24.790004 systemd[1]: Removed slice kubepods-besteffort-pode61da1d8_2e2d_4754_b234_25357c6e33b4.slice - libcontainer container kubepods-besteffort-pode61da1d8_2e2d_4754_b234_25357c6e33b4.slice. Sep 10 00:38:25.189399 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:49214.service - OpenSSH per-connection server daemon (10.0.0.1:49214). Sep 10 00:38:25.292332 sshd[4007]: Accepted publickey for core from 10.0.0.1 port 49214 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:25.298240 systemd[1]: Created slice kubepods-besteffort-pod34a44933_d77d_4912_a078_3cccfdb7879f.slice - libcontainer container kubepods-besteffort-pod34a44933_d77d_4912_a078_3cccfdb7879f.slice. Sep 10 00:38:25.300180 sshd[4007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:25.309237 systemd-logind[1448]: New session 8 of user core. Sep 10 00:38:25.314532 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 10 00:38:25.353463 kubelet[2518]: I0910 00:38:25.353384 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjstg\" (UniqueName: \"kubernetes.io/projected/34a44933-d77d-4912-a078-3cccfdb7879f-kube-api-access-zjstg\") pod \"whisker-6bb5dbd9c4-mldfs\" (UID: \"34a44933-d77d-4912-a078-3cccfdb7879f\") " pod="calico-system/whisker-6bb5dbd9c4-mldfs" Sep 10 00:38:25.353463 kubelet[2518]: I0910 00:38:25.353461 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34a44933-d77d-4912-a078-3cccfdb7879f-whisker-ca-bundle\") pod \"whisker-6bb5dbd9c4-mldfs\" (UID: \"34a44933-d77d-4912-a078-3cccfdb7879f\") " pod="calico-system/whisker-6bb5dbd9c4-mldfs" Sep 10 00:38:25.353463 kubelet[2518]: I0910 00:38:25.353484 2518 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34a44933-d77d-4912-a078-3cccfdb7879f-whisker-backend-key-pair\") pod \"whisker-6bb5dbd9c4-mldfs\" (UID: \"34a44933-d77d-4912-a078-3cccfdb7879f\") " pod="calico-system/whisker-6bb5dbd9c4-mldfs" Sep 10 00:38:25.473022 sshd[4007]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:25.477434 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:49214.service: Deactivated successfully. Sep 10 00:38:25.479793 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:38:25.480491 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:38:25.481567 systemd-logind[1448]: Removed session 8. Sep 10 00:38:25.606246 containerd[1471]: time="2025-09-10T00:38:25.606172109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb5dbd9c4-mldfs,Uid:34a44933-d77d-4912-a078-3cccfdb7879f,Namespace:calico-system,Attempt:0,}" Sep 10 00:38:26.253843 systemd-networkd[1393]: calie0e3bbe22db: Link UP Sep 10 00:38:26.255358 systemd-networkd[1393]: calie0e3bbe22db: Gained carrier Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.159 [INFO][4037] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.172 [INFO][4037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0 whisker-6bb5dbd9c4- calico-system 34a44933-d77d-4912-a078-3cccfdb7879f 933 0 2025-09-10 00:38:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bb5dbd9c4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6bb5dbd9c4-mldfs eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie0e3bbe22db [] [] }} ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.172 [INFO][4037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.203 [INFO][4052] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" HandleID="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Workload="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.204 [INFO][4052] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" HandleID="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Workload="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6bb5dbd9c4-mldfs", "timestamp":"2025-09-10 00:38:26.203824747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.204 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.204 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.204 [INFO][4052] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.212 [INFO][4052] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.222 [INFO][4052] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.226 [INFO][4052] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.228 [INFO][4052] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.230 [INFO][4052] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.230 [INFO][4052] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.232 [INFO][4052] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.236 [INFO][4052] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.240 [INFO][4052] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.240 [INFO][4052] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] 
handle="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" host="localhost" Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.240 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:26.275344 containerd[1471]: 2025-09-10 00:38:26.240 [INFO][4052] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" HandleID="k8s-pod-network.0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Workload="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.275932 containerd[1471]: 2025-09-10 00:38:26.244 [INFO][4037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0", GenerateName:"whisker-6bb5dbd9c4-", Namespace:"calico-system", SelfLink:"", UID:"34a44933-d77d-4912-a078-3cccfdb7879f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb5dbd9c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6bb5dbd9c4-mldfs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0e3bbe22db", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:26.275932 containerd[1471]: 2025-09-10 00:38:26.244 [INFO][4037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.275932 containerd[1471]: 2025-09-10 00:38:26.244 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0e3bbe22db ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.275932 containerd[1471]: 2025-09-10 00:38:26.254 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.275932 containerd[1471]: 2025-09-10 00:38:26.256 [INFO][4037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0", GenerateName:"whisker-6bb5dbd9c4-", Namespace:"calico-system", SelfLink:"", UID:"34a44933-d77d-4912-a078-3cccfdb7879f", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bb5dbd9c4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba", Pod:"whisker-6bb5dbd9c4-mldfs", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie0e3bbe22db", MAC:"a6:ee:63:19:6a:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:26.275932 containerd[1471]: 2025-09-10 00:38:26.268 [INFO][4037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba" Namespace="calico-system" Pod="whisker-6bb5dbd9c4-mldfs" WorkloadEndpoint="localhost-k8s-whisker--6bb5dbd9c4--mldfs-eth0" Sep 10 00:38:26.312010 containerd[1471]: time="2025-09-10T00:38:26.311375571Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:26.312416 containerd[1471]: time="2025-09-10T00:38:26.312036554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:26.312416 containerd[1471]: time="2025-09-10T00:38:26.312148746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:26.312925 containerd[1471]: time="2025-09-10T00:38:26.312740112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:26.338766 systemd[1]: Started cri-containerd-0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba.scope - libcontainer container 0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba. 
Sep 10 00:38:26.359743 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:26.399174 containerd[1471]: time="2025-09-10T00:38:26.399097681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bb5dbd9c4-mldfs,Uid:34a44933-d77d-4912-a078-3cccfdb7879f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba\"" Sep 10 00:38:26.402056 containerd[1471]: time="2025-09-10T00:38:26.402014624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:38:26.781943 containerd[1471]: time="2025-09-10T00:38:26.781798649Z" level=info msg="StopPodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\"" Sep 10 00:38:26.784452 kubelet[2518]: I0910 00:38:26.784395 2518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61da1d8-2e2d-4754-b234-25357c6e33b4" path="/var/lib/kubelet/pods/e61da1d8-2e2d-4754-b234-25357c6e33b4/volumes" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.882 [INFO][4142] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.886 [INFO][4142] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" iface="eth0" netns="/var/run/netns/cni-a2faa772-b822-855a-1582-a2177a9950ab" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.887 [INFO][4142] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" iface="eth0" netns="/var/run/netns/cni-a2faa772-b822-855a-1582-a2177a9950ab" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.887 [INFO][4142] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" iface="eth0" netns="/var/run/netns/cni-a2faa772-b822-855a-1582-a2177a9950ab" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.887 [INFO][4142] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.887 [INFO][4142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.997 [INFO][4151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.997 [INFO][4151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:26.997 [INFO][4151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:27.086 [WARNING][4151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:27.087 [INFO][4151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:27.088 [INFO][4151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:27.097095 containerd[1471]: 2025-09-10 00:38:27.093 [INFO][4142] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:27.098001 containerd[1471]: time="2025-09-10T00:38:27.097933645Z" level=info msg="TearDown network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" successfully" Sep 10 00:38:27.098001 containerd[1471]: time="2025-09-10T00:38:27.097977671Z" level=info msg="StopPodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" returns successfully" Sep 10 00:38:27.098519 kubelet[2518]: E0910 00:38:27.098479 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:27.099903 containerd[1471]: time="2025-09-10T00:38:27.099251219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzlx,Uid:9c002682-d6bc-423d-b99d-e4ec01f48f3d,Namespace:kube-system,Attempt:1,}" Sep 10 00:38:27.102303 systemd[1]: run-netns-cni\x2da2faa772\x2db822\x2d855a\x2d1582\x2da2177a9950ab.mount: Deactivated successfully. 
Sep 10 00:38:27.385260 systemd-networkd[1393]: cali9ef438801d4: Link UP Sep 10 00:38:27.392623 systemd-networkd[1393]: cali9ef438801d4: Gained carrier Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.273 [INFO][4161] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.296 [INFO][4161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0 coredns-668d6bf9bc- kube-system 9c002682-d6bc-423d-b99d-e4ec01f48f3d 947 0 2025-09-10 00:37:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-kjzlx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ef438801d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.296 [INFO][4161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.332 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" HandleID="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.332 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" HandleID="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af420), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kjzlx", "timestamp":"2025-09-10 00:38:27.332526643 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.332 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.332 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.332 [INFO][4173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.340 [INFO][4173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.349 [INFO][4173] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.354 [INFO][4173] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.356 [INFO][4173] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.359 [INFO][4173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.359 [INFO][4173] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.361 [INFO][4173] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.369 [INFO][4173] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.377 [INFO][4173] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.377 [INFO][4173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" host="localhost" Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.377 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
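
Both sandboxes land in the same affine IPAM block: this host holds 192.168.88.128/26, whisker-6bb5dbd9c4-mldfs received .129, and coredns-668d6bf9bc-kjzlx now gets the next free address, .130. A toy Go version of that walk-the-block assignment (deliberately simplified; Calico's real allocator also manages handles, affinity claims, and the host-wide IPAM lock visible in the entries above):

package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans an affine block in address order and returns the first
// address not yet handed out.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{
		block.Addr(): true, // the block's own base address is not assigned
	}
	for _, pod := range []string{"whisker-6bb5dbd9c4-mldfs", "coredns-668d6bf9bc-kjzlx"} {
		addr, ok := nextFree(block, allocated)
		if !ok {
			panic("block exhausted")
		}
		allocated[addr] = true
		fmt.Printf("%s -> %s/26\n", pod, addr) // .129, then .130
	}
}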
Sep 10 00:38:27.415553 containerd[1471]: 2025-09-10 00:38:27.377 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" HandleID="k8s-pod-network.fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.416284 containerd[1471]: 2025-09-10 00:38:27.381 [INFO][4161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c002682-d6bc-423d-b99d-e4ec01f48f3d", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kjzlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ef438801d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:27.416284 containerd[1471]: 2025-09-10 00:38:27.381 [INFO][4161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.416284 containerd[1471]: 2025-09-10 00:38:27.381 [INFO][4161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ef438801d4 ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.416284 containerd[1471]: 2025-09-10 00:38:27.388 [INFO][4161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.416284 
containerd[1471]: 2025-09-10 00:38:27.389 [INFO][4161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c002682-d6bc-423d-b99d-e4ec01f48f3d", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c", Pod:"coredns-668d6bf9bc-kjzlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ef438801d4", MAC:"86:27:ba:b3:40:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:27.416284 containerd[1471]: 2025-09-10 00:38:27.402 [INFO][4161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c" Namespace="kube-system" Pod="coredns-668d6bf9bc-kjzlx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:27.440049 containerd[1471]: time="2025-09-10T00:38:27.439860149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:27.440377 containerd[1471]: time="2025-09-10T00:38:27.440012699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:27.440377 containerd[1471]: time="2025-09-10T00:38:27.440047418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:27.440377 containerd[1471]: time="2025-09-10T00:38:27.440196943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:27.475616 systemd[1]: Started cri-containerd-fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c.scope - libcontainer container fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c. Sep 10 00:38:27.520766 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:27.612980 containerd[1471]: time="2025-09-10T00:38:27.612888844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kjzlx,Uid:9c002682-d6bc-423d-b99d-e4ec01f48f3d,Namespace:kube-system,Attempt:1,} returns sandbox id \"fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c\"" Sep 10 00:38:27.616050 kubelet[2518]: E0910 00:38:27.615593 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:27.618860 systemd-networkd[1393]: calie0e3bbe22db: Gained IPv6LL Sep 10 00:38:27.619639 containerd[1471]: time="2025-09-10T00:38:27.618870615Z" level=info msg="CreateContainer within sandbox \"fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:38:27.728653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount888173131.mount: Deactivated successfully. Sep 10 00:38:27.732613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount232862661.mount: Deactivated successfully. Sep 10 00:38:27.734576 containerd[1471]: time="2025-09-10T00:38:27.734503307Z" level=info msg="CreateContainer within sandbox \"fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ffc15eb59bed76be2da085d8a00402c6f71ae7a1c59602b1dd719c93d9cc4de\"" Sep 10 00:38:27.735381 containerd[1471]: time="2025-09-10T00:38:27.735332630Z" level=info msg="StartContainer for \"8ffc15eb59bed76be2da085d8a00402c6f71ae7a1c59602b1dd719c93d9cc4de\"" Sep 10 00:38:27.776356 systemd[1]: Started cri-containerd-8ffc15eb59bed76be2da085d8a00402c6f71ae7a1c59602b1dd719c93d9cc4de.scope - libcontainer container 8ffc15eb59bed76be2da085d8a00402c6f71ae7a1c59602b1dd719c93d9cc4de. Sep 10 00:38:27.781191 containerd[1471]: time="2025-09-10T00:38:27.781154848Z" level=info msg="StopPodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\"" Sep 10 00:38:28.021763 containerd[1471]: time="2025-09-10T00:38:28.021601606Z" level=info msg="StartContainer for \"8ffc15eb59bed76be2da085d8a00402c6f71ae7a1c59602b1dd719c93d9cc4de\" returns successfully" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:27.995 [INFO][4284] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:27.996 [INFO][4284] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" iface="eth0" netns="/var/run/netns/cni-ed653fc0-b2ed-0c15-0329-3a8bfd12387f" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:27.996 [INFO][4284] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" iface="eth0" netns="/var/run/netns/cni-ed653fc0-b2ed-0c15-0329-3a8bfd12387f" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:27.996 [INFO][4284] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" iface="eth0" netns="/var/run/netns/cni-ed653fc0-b2ed-0c15-0329-3a8bfd12387f" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:27.996 [INFO][4284] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:27.997 [INFO][4284] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.025 [INFO][4294] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.026 [INFO][4294] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.026 [INFO][4294] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.032 [WARNING][4294] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.032 [INFO][4294] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.034 [INFO][4294] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:28.043973 containerd[1471]: 2025-09-10 00:38:28.039 [INFO][4284] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:28.044663 containerd[1471]: time="2025-09-10T00:38:28.044239958Z" level=info msg="TearDown network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" successfully" Sep 10 00:38:28.044663 containerd[1471]: time="2025-09-10T00:38:28.044287061Z" level=info msg="StopPodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" returns successfully" Sep 10 00:38:28.045294 containerd[1471]: time="2025-09-10T00:38:28.045264654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6cb89779-d447p,Uid:3bc944ca-86e6-4492-bde5-0808ef5e617e,Namespace:calico-system,Attempt:1,}" Sep 10 00:38:28.185505 kubelet[2518]: E0910 00:38:28.185462 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:28.464674 systemd[1]: run-netns-cni\x2ded653fc0\x2db2ed\x2d0c15\x2d0329\x2d3a8bfd12387f.mount: Deactivated successfully. Sep 10 00:38:28.600107 kubelet[2518]: I0910 00:38:28.598618 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kjzlx" podStartSLOduration=39.59858851 podStartE2EDuration="39.59858851s" podCreationTimestamp="2025-09-10 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:38:28.598429979 +0000 UTC m=+43.910644668" watchObservedRunningTime="2025-09-10 00:38:28.59858851 +0000 UTC m=+43.910803199" Sep 10 00:38:29.068217 systemd-networkd[1393]: cali4ec0679a738: Link UP Sep 10 00:38:29.069194 systemd-networkd[1393]: cali4ec0679a738: Gained carrier Sep 10 00:38:29.078417 containerd[1471]: time="2025-09-10T00:38:29.078355091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:29.080402 containerd[1471]: time="2025-09-10T00:38:29.080286816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.092 [INFO][4312] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.312 [INFO][4312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0 calico-kube-controllers-c6cb89779- calico-system 3bc944ca-86e6-4492-bde5-0808ef5e617e 959 0 2025-09-10 00:38:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c6cb89779 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-c6cb89779-d447p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4ec0679a738 [] [] }} ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.313 [INFO][4312] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.688 [INFO][4344] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" HandleID="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.689 [INFO][4344] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" HandleID="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035efe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-c6cb89779-d447p", "timestamp":"2025-09-10 00:38:28.688544413 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.689 [INFO][4344] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.689 [INFO][4344] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.689 [INFO][4344] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.838 [INFO][4344] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.843 [INFO][4344] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.847 [INFO][4344] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.850 [INFO][4344] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.852 [INFO][4344] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.853 [INFO][4344] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.854 [INFO][4344] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:28.991 [INFO][4344] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" host="localhost" Sep 10 
00:38:29.086199 containerd[1471]: 2025-09-10 00:38:29.062 [INFO][4344] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:29.062 [INFO][4344] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" host="localhost" Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:29.062 [INFO][4344] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:29.086199 containerd[1471]: 2025-09-10 00:38:29.062 [INFO][4344] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" HandleID="k8s-pod-network.f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.089039 containerd[1471]: 2025-09-10 00:38:29.065 [INFO][4312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0", GenerateName:"calico-kube-controllers-c6cb89779-", Namespace:"calico-system", SelfLink:"", UID:"3bc944ca-86e6-4492-bde5-0808ef5e617e", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6cb89779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-c6cb89779-d447p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ec0679a738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:29.089039 containerd[1471]: 2025-09-10 00:38:29.066 [INFO][4312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.089039 containerd[1471]: 2025-09-10 00:38:29.066 [INFO][4312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ec0679a738 
ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.089039 containerd[1471]: 2025-09-10 00:38:29.068 [INFO][4312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.089039 containerd[1471]: 2025-09-10 00:38:29.069 [INFO][4312] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0", GenerateName:"calico-kube-controllers-c6cb89779-", Namespace:"calico-system", SelfLink:"", UID:"3bc944ca-86e6-4492-bde5-0808ef5e617e", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6cb89779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d", Pod:"calico-kube-controllers-c6cb89779-d447p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ec0679a738", MAC:"f2:e9:da:2a:3c:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:29.089039 containerd[1471]: 2025-09-10 00:38:29.082 [INFO][4312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d" Namespace="calico-system" Pod="calico-kube-controllers-c6cb89779-d447p" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:29.089825 containerd[1471]: time="2025-09-10T00:38:29.089778565Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:29.090459 systemd-networkd[1393]: cali9ef438801d4: Gained IPv6LL Sep 10 00:38:29.099842 containerd[1471]: time="2025-09-10T00:38:29.098531512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:29.099842 containerd[1471]: time="2025-09-10T00:38:29.099155398Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.696864911s" Sep 10 00:38:29.099842 containerd[1471]: time="2025-09-10T00:38:29.099197942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 00:38:29.101749 containerd[1471]: time="2025-09-10T00:38:29.101709687Z" level=info msg="CreateContainer within sandbox \"0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:38:29.111874 containerd[1471]: time="2025-09-10T00:38:29.111735345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:29.111874 containerd[1471]: time="2025-09-10T00:38:29.111815522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:29.111874 containerd[1471]: time="2025-09-10T00:38:29.111830522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:29.112164 containerd[1471]: time="2025-09-10T00:38:29.111954746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:29.121186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount921381942.mount: Deactivated successfully. Sep 10 00:38:29.139274 systemd[1]: Started cri-containerd-f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d.scope - libcontainer container f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d. 
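Annotation: the "Pulled image ... in 2.696864911s" entry above is containerd's CRI plugin reporting the wall-clock pull time for the whisker image. Below is a minimal sketch (my own tooling, not part of containerd) that scans a saved journal dump on stdin and tabulates those durations; the doubled backslashes in the regexp match the escaped quotes exactly as they appear in the log text.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

// Matches containerd CRI lines like the one above, e.g.
//   msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" ... in 2.696864911s"
var pulled = regexp.MustCompile(`Pulled image \\"([^"]+)\\".* in ([0-9][0-9a-zµ.]*)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // CNI endpoint dumps make very long lines
	for sc.Scan() {
		m := pulled.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		if d, err := time.ParseDuration(m[2]); err == nil {
			fmt.Printf("%v\t%s\n", d, m[1])
		}
	}
}

Fed this boot's journal, it would report 2.696864911s for whisker:v3.30.3 and, further down, 2.579191238s for kube-controllers:v3.30.3.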
Sep 10 00:38:29.151993 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:29.176179 containerd[1471]: time="2025-09-10T00:38:29.176135365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c6cb89779-d447p,Uid:3bc944ca-86e6-4492-bde5-0808ef5e617e,Namespace:calico-system,Attempt:1,} returns sandbox id \"f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d\"" Sep 10 00:38:29.178239 containerd[1471]: time="2025-09-10T00:38:29.178171456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 00:38:29.187971 kubelet[2518]: E0910 00:38:29.187946 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:29.372668 containerd[1471]: time="2025-09-10T00:38:29.372609730Z" level=info msg="CreateContainer within sandbox \"0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f794aee45de76e589e8ce9ed7e7435f60d854f121d859a42f7f31773471e3b97\"" Sep 10 00:38:29.373323 containerd[1471]: time="2025-09-10T00:38:29.373276700Z" level=info msg="StartContainer for \"f794aee45de76e589e8ce9ed7e7435f60d854f121d859a42f7f31773471e3b97\"" Sep 10 00:38:29.416283 systemd[1]: Started cri-containerd-f794aee45de76e589e8ce9ed7e7435f60d854f121d859a42f7f31773471e3b97.scope - libcontainer container f794aee45de76e589e8ce9ed7e7435f60d854f121d859a42f7f31773471e3b97. Sep 10 00:38:29.463240 containerd[1471]: time="2025-09-10T00:38:29.463134991Z" level=info msg="StartContainer for \"f794aee45de76e589e8ce9ed7e7435f60d854f121d859a42f7f31773471e3b97\" returns successfully" Sep 10 00:38:29.780653 containerd[1471]: time="2025-09-10T00:38:29.780451762Z" level=info msg="StopPodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\"" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:29.973 [INFO][4470] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:29.974 [INFO][4470] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" iface="eth0" netns="/var/run/netns/cni-4de74968-2631-1c64-87a4-b3d15691b1aa" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:29.974 [INFO][4470] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" iface="eth0" netns="/var/run/netns/cni-4de74968-2631-1c64-87a4-b3d15691b1aa" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:29.975 [INFO][4470] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" iface="eth0" netns="/var/run/netns/cni-4de74968-2631-1c64-87a4-b3d15691b1aa" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:29.975 [INFO][4470] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:29.975 [INFO][4470] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.003 [INFO][4501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.003 [INFO][4501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.003 [INFO][4501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.008 [WARNING][4501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.009 [INFO][4501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.010 [INFO][4501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:30.017215 containerd[1471]: 2025-09-10 00:38:30.014 [INFO][4470] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:30.017675 containerd[1471]: time="2025-09-10T00:38:30.017430269Z" level=info msg="TearDown network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" successfully" Sep 10 00:38:30.017675 containerd[1471]: time="2025-09-10T00:38:30.017475859Z" level=info msg="StopPodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" returns successfully" Sep 10 00:38:30.020557 containerd[1471]: time="2025-09-10T00:38:30.020503721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-66hjz,Uid:9c78a26e-1b4d-439c-a913-c9a5704cad9a,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:38:30.021217 systemd[1]: run-netns-cni\x2d4de74968\x2d2631\x2d1c64\x2d87a4\x2db3d15691b1aa.mount: Deactivated successfully. 
Sep 10 00:38:30.159052 systemd-networkd[1393]: cali2e4bb2e0449: Link UP Sep 10 00:38:30.159301 systemd-networkd[1393]: cali2e4bb2e0449: Gained carrier Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.066 [INFO][4510] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.079 [INFO][4510] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0 calico-apiserver-6c7c97f695- calico-apiserver 9c78a26e-1b4d-439c-a913-c9a5704cad9a 983 0 2025-09-10 00:38:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7c97f695 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c7c97f695-66hjz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e4bb2e0449 [] [] }} ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.079 [INFO][4510] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.117 [INFO][4524] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" HandleID="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.117 [INFO][4524] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" HandleID="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013ae70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c7c97f695-66hjz", "timestamp":"2025-09-10 00:38:30.117214485 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.117 [INFO][4524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.117 [INFO][4524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.117 [INFO][4524] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.124 [INFO][4524] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.129 [INFO][4524] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.134 [INFO][4524] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.136 [INFO][4524] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.139 [INFO][4524] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.139 [INFO][4524] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.141 [INFO][4524] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204 Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.146 [INFO][4524] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.151 [INFO][4524] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.151 [INFO][4524] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" host="localhost" Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.151 [INFO][4524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
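Annotation: the ipam.go trace above shows the shape of Calico's allocator — take the host-wide lock, confirm this host's affinity to the block 192.168.88.128/26, load the block, and claim the lowest free address (.131 for the kube-controllers pod earlier, .132 here). A toy in-memory model of that block scan follows, with assumptions labelled; Calico's real allocator persists blocks, affinities, and handles in the datastore, and its lock spans processes rather than goroutines.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block is a toy IPAM block. The mutex stands in for the "host-wide IPAM
// lock" in the log above.
type block struct {
	mu    sync.Mutex
	cidr  netip.Prefix
	inUse map[netip.Addr]bool
}

// assign claims the lowest free address, mirroring the logged
// "Attempting to assign 1 addresses from block" step.
func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.inUse[a] {
			b.inUse[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr:  netip.MustParsePrefix("192.168.88.128/26"),
		inUse: map[netip.Addr]bool{},
	}
	// Seed with addresses claimed earlier in this boot; only .131 is visible
	// in this excerpt, the others are assumed from the preceding log.
	for _, s := range []string{"192.168.88.128", "192.168.88.129", "192.168.88.130", "192.168.88.131"} {
		b.inUse[netip.MustParseAddr(s)] = true
	}
	fmt.Println(b.assign()) // 192.168.88.132 true — the claim logged above
}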
Sep 10 00:38:30.174524 containerd[1471]: 2025-09-10 00:38:30.151 [INFO][4524] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" HandleID="k8s-pod-network.3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.175738 containerd[1471]: 2025-09-10 00:38:30.155 [INFO][4510] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c78a26e-1b4d-439c-a913-c9a5704cad9a", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c7c97f695-66hjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e4bb2e0449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:30.175738 containerd[1471]: 2025-09-10 00:38:30.155 [INFO][4510] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.175738 containerd[1471]: 2025-09-10 00:38:30.155 [INFO][4510] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e4bb2e0449 ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.175738 containerd[1471]: 2025-09-10 00:38:30.158 [INFO][4510] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.175738 containerd[1471]: 2025-09-10 00:38:30.158 [INFO][4510] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c78a26e-1b4d-439c-a913-c9a5704cad9a", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204", Pod:"calico-apiserver-6c7c97f695-66hjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e4bb2e0449", MAC:"1e:ce:10:92:de:de", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:30.175738 containerd[1471]: 2025-09-10 00:38:30.170 [INFO][4510] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-66hjz" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:30.199208 containerd[1471]: time="2025-09-10T00:38:30.199005728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:30.199208 containerd[1471]: time="2025-09-10T00:38:30.199111777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:30.199208 containerd[1471]: time="2025-09-10T00:38:30.199154811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:30.199511 containerd[1471]: time="2025-09-10T00:38:30.199256030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:30.218317 systemd[1]: Started cri-containerd-3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204.scope - libcontainer container 3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204. 
Sep 10 00:38:30.232361 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:30.256962 containerd[1471]: time="2025-09-10T00:38:30.256908763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-66hjz,Uid:9c78a26e-1b4d-439c-a913-c9a5704cad9a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204\"" Sep 10 00:38:30.489076 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:49082.service - OpenSSH per-connection server daemon (10.0.0.1:49082). Sep 10 00:38:30.546747 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 49082 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:30.550207 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:30.564504 systemd-logind[1448]: New session 9 of user core. Sep 10 00:38:30.572353 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 00:38:30.751442 sshd[4581]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:30.756676 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:49082.service: Deactivated successfully. Sep 10 00:38:30.759797 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:38:30.760985 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:38:30.762045 systemd-logind[1448]: Removed session 9. Sep 10 00:38:30.783571 containerd[1471]: time="2025-09-10T00:38:30.783494489Z" level=info msg="StopPodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\"" Sep 10 00:38:30.785261 containerd[1471]: time="2025-09-10T00:38:30.785214694Z" level=info msg="StopPodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\"" Sep 10 00:38:30.785626 containerd[1471]: time="2025-09-10T00:38:30.785560292Z" level=info msg="StopPodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\"" Sep 10 00:38:31.010456 systemd-networkd[1393]: cali4ec0679a738: Gained IPv6LL Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:30.951 [INFO][4627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:30.951 [INFO][4627] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" iface="eth0" netns="/var/run/netns/cni-d1354c4d-f063-3554-38f4-3ae9af2ff56e" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:30.952 [INFO][4627] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" iface="eth0" netns="/var/run/netns/cni-d1354c4d-f063-3554-38f4-3ae9af2ff56e" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:30.952 [INFO][4627] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" iface="eth0" netns="/var/run/netns/cni-d1354c4d-f063-3554-38f4-3ae9af2ff56e" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:30.952 [INFO][4627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:30.953 [INFO][4627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.018 [INFO][4652] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.018 [INFO][4652] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.018 [INFO][4652] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.027 [WARNING][4652] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.028 [INFO][4652] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.030 [INFO][4652] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:31.054157 containerd[1471]: 2025-09-10 00:38:31.042 [INFO][4627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:31.058277 containerd[1471]: time="2025-09-10T00:38:31.057775536Z" level=info msg="TearDown network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" successfully" Sep 10 00:38:31.058277 containerd[1471]: time="2025-09-10T00:38:31.057817458Z" level=info msg="StopPodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" returns successfully" Sep 10 00:38:31.058533 kubelet[2518]: E0910 00:38:31.058428 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:31.063722 containerd[1471]: time="2025-09-10T00:38:31.059343270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvpsb,Uid:ac5d8d6b-a420-4114-bc22-8d86a77072d3,Namespace:kube-system,Attempt:1,}" Sep 10 00:38:31.060500 systemd[1]: run-netns-cni\x2dd1354c4d\x2df063\x2d3554\x2d38f4\x2d3ae9af2ff56e.mount: Deactivated successfully. 
Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.958 [INFO][4629] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.960 [INFO][4629] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" iface="eth0" netns="/var/run/netns/cni-c469f994-63ed-2986-84b5-b1dd9b8d33a1" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.961 [INFO][4629] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" iface="eth0" netns="/var/run/netns/cni-c469f994-63ed-2986-84b5-b1dd9b8d33a1" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.961 [INFO][4629] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" iface="eth0" netns="/var/run/netns/cni-c469f994-63ed-2986-84b5-b1dd9b8d33a1" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.961 [INFO][4629] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.961 [INFO][4629] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.026 [INFO][4662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.026 [INFO][4662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.030 [INFO][4662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.036 [WARNING][4662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.037 [INFO][4662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.039 [INFO][4662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.053 [INFO][4629] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:31.147324 containerd[1471]: time="2025-09-10T00:38:31.067904612Z" level=info msg="TearDown network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" successfully" Sep 10 00:38:31.147324 containerd[1471]: time="2025-09-10T00:38:31.067933649Z" level=info msg="StopPodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" returns successfully" Sep 10 00:38:31.147324 containerd[1471]: time="2025-09-10T00:38:31.068727775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bwmkx,Uid:30f4ddb7-eda4-4f91-9889-caa4c6fe0752,Namespace:calico-system,Attempt:1,}" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.977 [INFO][4628] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.977 [INFO][4628] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" iface="eth0" netns="/var/run/netns/cni-9759e8d8-78d5-6154-a006-cb2023dbd345" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.978 [INFO][4628] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" iface="eth0" netns="/var/run/netns/cni-9759e8d8-78d5-6154-a006-cb2023dbd345" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.978 [INFO][4628] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" iface="eth0" netns="/var/run/netns/cni-9759e8d8-78d5-6154-a006-cb2023dbd345" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.978 [INFO][4628] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:30.979 [INFO][4628] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.074 [INFO][4669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.075 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.076 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.084 [WARNING][4669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.084 [INFO][4669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.089 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:31.147324 containerd[1471]: 2025-09-10 00:38:31.093 [INFO][4628] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:31.073137 systemd[1]: run-netns-cni\x2dc469f994\x2d63ed\x2d2986\x2d84b5\x2db1dd9b8d33a1.mount: Deactivated successfully. Sep 10 00:38:31.148092 containerd[1471]: time="2025-09-10T00:38:31.102269797Z" level=info msg="TearDown network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" successfully" Sep 10 00:38:31.148092 containerd[1471]: time="2025-09-10T00:38:31.102307761Z" level=info msg="StopPodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" returns successfully" Sep 10 00:38:31.148092 containerd[1471]: time="2025-09-10T00:38:31.112163592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-rjghd,Uid:90f17e30-9dec-4e54-8e40-8da4f9ce8c2b,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:38:31.106576 systemd[1]: run-netns-cni\x2d9759e8d8\x2d78d5\x2d6154\x2da006\x2dcb2023dbd345.mount: Deactivated successfully. 
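Annotation: the recurring kubelet dns.go:153 errors (another follows immediately below) mean the node's resolv.conf lists more nameservers than the three the resolver supports (glibc's MAXNS), so kubelet applies only the first three — 1.1.1.1 1.0.0.1 8.8.8.8 — and logs that the rest were omitted. A sketch of that clamping; kubelet's actual parser lives in pkg/kubelet/network/dns, and the fourth nameserver here is hypothetical:

package main

import (
	"fmt"
	"strings"
)

// clampNameservers keeps the first maxNS nameserver entries from a
// resolv.conf body and reports what was dropped. maxNS is 3 because glibc's
// resolver (MAXNS) ignores anything beyond three servers anyway.
func clampNameservers(resolvConf string, maxNS int) (kept, dropped []string) {
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNS {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	// A hypothetical resolv.conf that would produce the logged error.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n"
	kept, dropped := clampNameservers(conf, 3)
	fmt.Println("applied:", strings.Join(kept, " ")) // applied: 1.1.1.1 1.0.0.1 8.8.8.8
	fmt.Println("omitted:", dropped)                 // omitted: [8.8.4.4]
}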
Sep 10 00:38:31.309808 kubelet[2518]: I0910 00:38:31.308259 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:38:31.309808 kubelet[2518]: E0910 00:38:31.308669 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:31.523483 systemd-networkd[1393]: cali2e4bb2e0449: Gained IPv6LL Sep 10 00:38:31.659956 systemd-networkd[1393]: calieaa939c55d3: Link UP Sep 10 00:38:31.660346 systemd-networkd[1393]: calieaa939c55d3: Gained carrier Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.545 [INFO][4716] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.556 [INFO][4716] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0 coredns-668d6bf9bc- kube-system ac5d8d6b-a420-4114-bc22-8d86a77072d3 1000 0 2025-09-10 00:37:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lvpsb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieaa939c55d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.556 [INFO][4716] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.602 [INFO][4750] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" HandleID="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.602 [INFO][4750] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" HandleID="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a7160), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lvpsb", "timestamp":"2025-09-10 00:38:31.60222526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.602 [INFO][4750] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.602 [INFO][4750] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.602 [INFO][4750] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.609 [INFO][4750] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.614 [INFO][4750] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.618 [INFO][4750] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.622 [INFO][4750] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.626 [INFO][4750] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.626 [INFO][4750] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.627 [INFO][4750] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2 Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.643 [INFO][4750] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.652 [INFO][4750] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.652 [INFO][4750] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" host="localhost" Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.652 [INFO][4750] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
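Annotation: in the endpoint dump that follows, the v3.WorkloadEndpointPort values are rendered with fmt's %#v, which prints unsigned integers in hex, so the CoreDNS ports show up as Port:0x35 and Port:0x23c1. Decoded, they are the same dns/dns-tcp/metrics ports already visible in decimal ({dns UDP 53}, {metrics TCP 9153}) in the plugin.go 340 line above:

package main

import "fmt"

func main() {
	for _, p := range []struct {
		name string
		port uint16
	}{{"dns", 0x35}, {"dns-tcp", 0x35}, {"metrics", 0x23c1}} {
		fmt.Printf("%-8s %d\n", p.name, p.port) // dns 53, dns-tcp 53, metrics 9153
	}
	fmt.Printf("%#v\n", uint16(53)) // 0x35 — %#v hex-prints unsigned ints, as in the dump
}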
Sep 10 00:38:31.679683 containerd[1471]: 2025-09-10 00:38:31.652 [INFO][4750] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" HandleID="k8s-pod-network.b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.680914 containerd[1471]: 2025-09-10 00:38:31.655 [INFO][4716] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac5d8d6b-a420-4114-bc22-8d86a77072d3", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lvpsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa939c55d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:31.680914 containerd[1471]: 2025-09-10 00:38:31.655 [INFO][4716] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.680914 containerd[1471]: 2025-09-10 00:38:31.656 [INFO][4716] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieaa939c55d3 ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.680914 containerd[1471]: 2025-09-10 00:38:31.662 [INFO][4716] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.680914 
containerd[1471]: 2025-09-10 00:38:31.663 [INFO][4716] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac5d8d6b-a420-4114-bc22-8d86a77072d3", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2", Pod:"coredns-668d6bf9bc-lvpsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa939c55d3", MAC:"b6:76:4a:ae:44:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:31.680914 containerd[1471]: 2025-09-10 00:38:31.676 [INFO][4716] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2" Namespace="kube-system" Pod="coredns-668d6bf9bc-lvpsb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:31.702551 containerd[1471]: time="2025-09-10T00:38:31.701925451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:31.702551 containerd[1471]: time="2025-09-10T00:38:31.701991641Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:31.702551 containerd[1471]: time="2025-09-10T00:38:31.702014336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:31.702551 containerd[1471]: time="2025-09-10T00:38:31.702187766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:31.731359 systemd[1]: Started cri-containerd-b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2.scope - libcontainer container b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2. Sep 10 00:38:31.742301 containerd[1471]: time="2025-09-10T00:38:31.742079515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:31.746258 containerd[1471]: time="2025-09-10T00:38:31.744577781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 10 00:38:31.749962 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:31.753601 containerd[1471]: time="2025-09-10T00:38:31.753504681Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:31.756498 containerd[1471]: time="2025-09-10T00:38:31.756447409Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:31.757500 containerd[1471]: time="2025-09-10T00:38:31.757445065Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 2.579191238s" Sep 10 00:38:31.757593 containerd[1471]: time="2025-09-10T00:38:31.757503249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 10 00:38:31.760979 containerd[1471]: time="2025-09-10T00:38:31.760394487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:38:31.768583 systemd-networkd[1393]: cali750057a41c7: Link UP Sep 10 00:38:31.770643 systemd-networkd[1393]: cali750057a41c7: Gained carrier Sep 10 00:38:31.770732 containerd[1471]: time="2025-09-10T00:38:31.770660050Z" level=info msg="CreateContainer within sandbox \"f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.538 [INFO][4718] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.556 [INFO][4718] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--bwmkx-eth0 goldmane-54d579b49d- calico-system 30f4ddb7-eda4-4f91-9889-caa4c6fe0752 1001 0 2025-09-10 00:38:03 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-bwmkx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali750057a41c7 [] [] }} 
ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.556 [INFO][4718] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.607 [INFO][4753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" HandleID="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.607 [INFO][4753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" HandleID="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004952c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-bwmkx", "timestamp":"2025-09-10 00:38:31.607358774 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.607 [INFO][4753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.652 [INFO][4753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.652 [INFO][4753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.712 [INFO][4753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.727 [INFO][4753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.733 [INFO][4753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.735 [INFO][4753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.737 [INFO][4753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.737 [INFO][4753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.739 [INFO][4753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864 Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.745 [INFO][4753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.752 [INFO][4753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.752 [INFO][4753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" host="localhost" Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.752 [INFO][4753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
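A small aside on the WorkloadEndpoint dumps for coredns earlier in this burst: the endpoint ports are printed in hex, so Port:0x35 on the dns and dns-tcp entries is the standard DNS port, and Port:0x23c1 on the metrics entry is CoreDNS's Prometheus metrics port. A one-liner confirms the decode:

    package main

    import "fmt"

    func main() {
    	fmt.Println(0x35, 0x23c1) // 53 and 9153: dns/dns-tcp and coredns metrics
    }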
Sep 10 00:38:31.789999 containerd[1471]: 2025-09-10 00:38:31.752 [INFO][4753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" HandleID="k8s-pod-network.dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.790849 containerd[1471]: 2025-09-10 00:38:31.759 [INFO][4718] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bwmkx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"30f4ddb7-eda4-4f91-9889-caa4c6fe0752", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-bwmkx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali750057a41c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:31.790849 containerd[1471]: 2025-09-10 00:38:31.759 [INFO][4718] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.790849 containerd[1471]: 2025-09-10 00:38:31.759 [INFO][4718] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali750057a41c7 ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.790849 containerd[1471]: 2025-09-10 00:38:31.771 [INFO][4718] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.790849 containerd[1471]: 2025-09-10 00:38:31.771 [INFO][4718] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bwmkx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"30f4ddb7-eda4-4f91-9889-caa4c6fe0752", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864", Pod:"goldmane-54d579b49d-bwmkx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali750057a41c7", MAC:"c6:21:90:14:8a:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:31.790849 containerd[1471]: 2025-09-10 00:38:31.786 [INFO][4718] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864" Namespace="calico-system" Pod="goldmane-54d579b49d-bwmkx" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:31.791884 containerd[1471]: time="2025-09-10T00:38:31.791828276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lvpsb,Uid:ac5d8d6b-a420-4114-bc22-8d86a77072d3,Namespace:kube-system,Attempt:1,} returns sandbox id \"b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2\"" Sep 10 00:38:31.792907 kubelet[2518]: E0910 00:38:31.792870 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:31.795682 containerd[1471]: time="2025-09-10T00:38:31.795644617Z" level=info msg="CreateContainer within sandbox \"b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:38:31.798279 containerd[1471]: time="2025-09-10T00:38:31.797929597Z" level=info msg="CreateContainer within sandbox \"f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a19e1a3ab0e77a6500a628d34a9e058e45a003e84a31c17a6bbc1b8b6a826c68\"" Sep 10 00:38:31.798436 containerd[1471]: time="2025-09-10T00:38:31.798382845Z" level=info msg="StartContainer for \"a19e1a3ab0e77a6500a628d34a9e058e45a003e84a31c17a6bbc1b8b6a826c68\"" Sep 10 00:38:31.821599 containerd[1471]: time="2025-09-10T00:38:31.821447017Z" level=info msg="CreateContainer within sandbox \"b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38748d1c2ae33c9aa70320837f83f08e96a2a0362734e275e7c54e19f188f9bb\"" Sep 10 00:38:31.823705 containerd[1471]: 
time="2025-09-10T00:38:31.823655195Z" level=info msg="StartContainer for \"38748d1c2ae33c9aa70320837f83f08e96a2a0362734e275e7c54e19f188f9bb\"" Sep 10 00:38:31.824109 containerd[1471]: time="2025-09-10T00:38:31.823497557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:31.824109 containerd[1471]: time="2025-09-10T00:38:31.823596500Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:31.824109 containerd[1471]: time="2025-09-10T00:38:31.823611631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:31.824109 containerd[1471]: time="2025-09-10T00:38:31.823820360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:31.850872 systemd[1]: Started cri-containerd-a19e1a3ab0e77a6500a628d34a9e058e45a003e84a31c17a6bbc1b8b6a826c68.scope - libcontainer container a19e1a3ab0e77a6500a628d34a9e058e45a003e84a31c17a6bbc1b8b6a826c68. Sep 10 00:38:31.856643 systemd[1]: Started cri-containerd-38748d1c2ae33c9aa70320837f83f08e96a2a0362734e275e7c54e19f188f9bb.scope - libcontainer container 38748d1c2ae33c9aa70320837f83f08e96a2a0362734e275e7c54e19f188f9bb. Sep 10 00:38:31.859656 systemd[1]: Started cri-containerd-dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864.scope - libcontainer container dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864. Sep 10 00:38:31.890097 systemd-networkd[1393]: calif1e2e0ca6a5: Link UP Sep 10 00:38:31.891154 systemd-networkd[1393]: calif1e2e0ca6a5: Gained carrier Sep 10 00:38:31.892996 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.549 [INFO][4710] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.578 [INFO][4710] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0 calico-apiserver-6c7c97f695- calico-apiserver 90f17e30-9dec-4e54-8e40-8da4f9ce8c2b 1004 0 2025-09-10 00:38:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c7c97f695 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c7c97f695-rjghd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1e2e0ca6a5 [] [] }} ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.578 [INFO][4710] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.618 [INFO][4767] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" HandleID="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.618 [INFO][4767] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" HandleID="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c7c97f695-rjghd", "timestamp":"2025-09-10 00:38:31.618098176 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.618 [INFO][4767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.752 [INFO][4767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.753 [INFO][4767] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.813 [INFO][4767] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.826 [INFO][4767] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.839 [INFO][4767] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.841 [INFO][4767] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.849 [INFO][4767] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.849 [INFO][4767] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.851 [INFO][4767] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97 Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.858 [INFO][4767] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.867 [INFO][4767] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.867 [INFO][4767] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.88.135/26] handle="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" host="localhost" Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.867 [INFO][4767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:31.946762 containerd[1471]: 2025-09-10 00:38:31.867 [INFO][4767] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" HandleID="k8s-pod-network.44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.947689 containerd[1471]: 2025-09-10 00:38:31.880 [INFO][4710] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c7c97f695-rjghd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1e2e0ca6a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:31.947689 containerd[1471]: 2025-09-10 00:38:31.881 [INFO][4710] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.947689 containerd[1471]: 2025-09-10 00:38:31.881 [INFO][4710] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1e2e0ca6a5 ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.947689 containerd[1471]: 2025-09-10 00:38:31.894 [INFO][4710] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:31.947689 containerd[1471]: 2025-09-10 00:38:31.895 [INFO][4710] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97", Pod:"calico-apiserver-6c7c97f695-rjghd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1e2e0ca6a5", MAC:"de:a6:de:48:bf:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:31.947689 containerd[1471]: 2025-09-10 00:38:31.940 [INFO][4710] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97" Namespace="calico-apiserver" Pod="calico-apiserver-6c7c97f695-rjghd" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:32.022039 containerd[1471]: time="2025-09-10T00:38:32.021958872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bwmkx,Uid:30f4ddb7-eda4-4f91-9889-caa4c6fe0752,Namespace:calico-system,Attempt:1,} returns sandbox id \"dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864\"" Sep 10 00:38:32.022347 containerd[1471]: time="2025-09-10T00:38:32.022105480Z" level=info msg="StartContainer for \"a19e1a3ab0e77a6500a628d34a9e058e45a003e84a31c17a6bbc1b8b6a826c68\" returns successfully" Sep 10 00:38:32.022347 containerd[1471]: time="2025-09-10T00:38:32.022261104Z" level=info msg="StartContainer for \"38748d1c2ae33c9aa70320837f83f08e96a2a0362734e275e7c54e19f188f9bb\" returns successfully" Sep 10 00:38:32.050321 containerd[1471]: time="2025-09-10T00:38:32.049883112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:32.050321 containerd[1471]: time="2025-09-10T00:38:32.049966114Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:32.050321 containerd[1471]: time="2025-09-10T00:38:32.049980343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:32.050321 containerd[1471]: time="2025-09-10T00:38:32.050139234Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:32.074835 systemd[1]: Started cri-containerd-44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97.scope - libcontainer container 44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97. Sep 10 00:38:32.098042 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:32.132057 containerd[1471]: time="2025-09-10T00:38:32.131989639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c7c97f695-rjghd,Uid:90f17e30-9dec-4e54-8e40-8da4f9ce8c2b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97\"" Sep 10 00:38:32.211633 kubelet[2518]: E0910 00:38:32.211155 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:32.213439 kubelet[2518]: E0910 00:38:32.212279 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:32.781090 containerd[1471]: time="2025-09-10T00:38:32.781008042Z" level=info msg="StopPodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\"" Sep 10 00:38:32.866319 systemd-networkd[1393]: cali750057a41c7: Gained IPv6LL Sep 10 00:38:32.888611 kubelet[2518]: I0910 00:38:32.888531 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c6cb89779-d447p" podStartSLOduration=26.305743366 podStartE2EDuration="28.88850657s" podCreationTimestamp="2025-09-10 00:38:04 +0000 UTC" firstStartedPulling="2025-09-10 00:38:29.177340403 +0000 UTC m=+44.489555092" lastFinishedPulling="2025-09-10 00:38:31.760103606 +0000 UTC m=+47.072318296" observedRunningTime="2025-09-10 00:38:32.554946982 +0000 UTC m=+47.867161671" watchObservedRunningTime="2025-09-10 00:38:32.88850657 +0000 UTC m=+48.200721259" Sep 10 00:38:32.930393 systemd-networkd[1393]: calif1e2e0ca6a5: Gained IPv6LL Sep 10 00:38:33.117466 kubelet[2518]: I0910 00:38:33.117370 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lvpsb" podStartSLOduration=44.117346611 podStartE2EDuration="44.117346611s" podCreationTimestamp="2025-09-10 00:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:38:32.889541587 +0000 UTC m=+48.201756276" watchObservedRunningTime="2025-09-10 00:38:33.117346611 +0000 UTC m=+48.429561320" Sep 10 00:38:33.212415 kubelet[2518]: E0910 00:38:33.212348 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.118 [INFO][5079] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.118 [INFO][5079] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" iface="eth0" netns="/var/run/netns/cni-c5e1fa6b-b4dc-ed55-200d-51b8853ff48a" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.120 [INFO][5079] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" iface="eth0" netns="/var/run/netns/cni-c5e1fa6b-b4dc-ed55-200d-51b8853ff48a" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.120 [INFO][5079] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" iface="eth0" netns="/var/run/netns/cni-c5e1fa6b-b4dc-ed55-200d-51b8853ff48a" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.120 [INFO][5079] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.120 [INFO][5079] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.156 [INFO][5108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.156 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.156 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.223 [WARNING][5108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.224 [INFO][5108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.228 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:33.248428 containerd[1471]: 2025-09-10 00:38:33.234 [INFO][5079] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:33.252223 containerd[1471]: time="2025-09-10T00:38:33.252100570Z" level=info msg="TearDown network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" successfully" Sep 10 00:38:33.252223 containerd[1471]: time="2025-09-10T00:38:33.252179275Z" level=info msg="StopPodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" returns successfully" Sep 10 00:38:33.252294 systemd[1]: run-netns-cni\x2dc5e1fa6b\x2db4dc\x2ded55\x2d200d\x2d51b8853ff48a.mount: Deactivated successfully. Sep 10 00:38:33.255421 containerd[1471]: time="2025-09-10T00:38:33.254962499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p7zs,Uid:2025065d-06f1-4598-a92c-46630a2af417,Namespace:calico-system,Attempt:1,}" Sep 10 00:38:33.288188 kernel: bpftool[5155]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 10 00:38:33.455201 systemd-networkd[1393]: cali58ef5ea5032: Link UP Sep 10 00:38:33.455475 systemd-networkd[1393]: cali58ef5ea5032: Gained carrier Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.328 [INFO][5144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--2p7zs-eth0 csi-node-driver- calico-system 2025065d-06f1-4598-a92c-46630a2af417 1048 0 2025-09-10 00:38:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-2p7zs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali58ef5ea5032 [] [] }} ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.329 [INFO][5144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.364 [INFO][5165] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" HandleID="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.364 [INFO][5165] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" HandleID="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-2p7zs", "timestamp":"2025-09-10 00:38:33.364480274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.364 [INFO][5165] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.364 [INFO][5165] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.364 [INFO][5165] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.373 [INFO][5165] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.423 [INFO][5165] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.431 [INFO][5165] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.433 [INFO][5165] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.435 [INFO][5165] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.435 [INFO][5165] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.437 [INFO][5165] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0 Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.440 [INFO][5165] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.447 [INFO][5165] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.448 [INFO][5165] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" host="localhost" Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.448 [INFO][5165] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
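By this point the node's affine block has handed out .133 through .136 in sequence. The block itself, 192.168.88.128/26, spans 64 addresses (192.168.88.128 through 192.168.88.191), so a single node can host far more than the handful of endpoints seen here before Calico needs to claim another block. Checking the range with the standard library:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	p := netip.MustParsePrefix("192.168.88.128/26")
    	fmt.Println(p.Masked().Addr())    // 192.168.88.128, first address in the block
    	fmt.Println(1 << (32 - p.Bits())) // 64 addresses in a /26
    }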
Sep 10 00:38:33.474811 containerd[1471]: 2025-09-10 00:38:33.448 [INFO][5165] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" HandleID="k8s-pod-network.74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.475487 containerd[1471]: 2025-09-10 00:38:33.452 [INFO][5144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2p7zs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025065d-06f1-4598-a92c-46630a2af417", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-2p7zs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58ef5ea5032", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:33.475487 containerd[1471]: 2025-09-10 00:38:33.452 [INFO][5144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.475487 containerd[1471]: 2025-09-10 00:38:33.452 [INFO][5144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58ef5ea5032 ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.475487 containerd[1471]: 2025-09-10 00:38:33.455 [INFO][5144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.475487 containerd[1471]: 2025-09-10 00:38:33.456 [INFO][5144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2p7zs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025065d-06f1-4598-a92c-46630a2af417", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0", Pod:"csi-node-driver-2p7zs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58ef5ea5032", MAC:"a6:ca:ee:90:6a:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:33.475487 containerd[1471]: 2025-09-10 00:38:33.469 [INFO][5144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0" Namespace="calico-system" Pod="csi-node-driver-2p7zs" WorkloadEndpoint="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:33.497089 containerd[1471]: time="2025-09-10T00:38:33.496972630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:38:33.497089 containerd[1471]: time="2025-09-10T00:38:33.497049149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:38:33.497089 containerd[1471]: time="2025-09-10T00:38:33.497063278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:33.498845 containerd[1471]: time="2025-09-10T00:38:33.498769485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:38:33.542385 systemd[1]: Started cri-containerd-74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0.scope - libcontainer container 74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0. 
Sep 10 00:38:33.567416 systemd-resolved[1332]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:38:33.582844 containerd[1471]: time="2025-09-10T00:38:33.582795917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2p7zs,Uid:2025065d-06f1-4598-a92c-46630a2af417,Namespace:calico-system,Attempt:1,} returns sandbox id \"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0\"" Sep 10 00:38:33.649712 systemd-networkd[1393]: vxlan.calico: Link UP Sep 10 00:38:33.649723 systemd-networkd[1393]: vxlan.calico: Gained carrier Sep 10 00:38:33.702654 systemd-networkd[1393]: calieaa939c55d3: Gained IPv6LL Sep 10 00:38:34.215015 kubelet[2518]: E0910 00:38:34.214977 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:35.016527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount656973097.mount: Deactivated successfully. Sep 10 00:38:35.095141 containerd[1471]: time="2025-09-10T00:38:35.095059911Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:35.096000 containerd[1471]: time="2025-09-10T00:38:35.095963555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 10 00:38:35.096995 containerd[1471]: time="2025-09-10T00:38:35.096967775Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:35.099828 containerd[1471]: time="2025-09-10T00:38:35.099788051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:35.100445 containerd[1471]: time="2025-09-10T00:38:35.100388283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.339952235s" Sep 10 00:38:35.100497 containerd[1471]: time="2025-09-10T00:38:35.100446988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 00:38:35.101987 containerd[1471]: time="2025-09-10T00:38:35.101952557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:38:35.103442 containerd[1471]: time="2025-09-10T00:38:35.103389001Z" level=info msg="CreateContainer within sandbox \"0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:38:35.106284 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Sep 10 00:38:35.118881 containerd[1471]: time="2025-09-10T00:38:35.118826918Z" level=info msg="CreateContainer within sandbox \"0088c267147d0988e5364e72ce83311470f801522292707956e5ca5d258698ba\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id 
\"44f9cb61f5de00df4be37915bc0a965309c76645212d867fc04e9913518eb1f4\"" Sep 10 00:38:35.119530 containerd[1471]: time="2025-09-10T00:38:35.119464793Z" level=info msg="StartContainer for \"44f9cb61f5de00df4be37915bc0a965309c76645212d867fc04e9913518eb1f4\"" Sep 10 00:38:35.151377 systemd[1]: Started cri-containerd-44f9cb61f5de00df4be37915bc0a965309c76645212d867fc04e9913518eb1f4.scope - libcontainer container 44f9cb61f5de00df4be37915bc0a965309c76645212d867fc04e9913518eb1f4. Sep 10 00:38:35.205049 containerd[1471]: time="2025-09-10T00:38:35.204986292Z" level=info msg="StartContainer for \"44f9cb61f5de00df4be37915bc0a965309c76645212d867fc04e9913518eb1f4\" returns successfully" Sep 10 00:38:35.234500 systemd-networkd[1393]: cali58ef5ea5032: Gained IPv6LL Sep 10 00:38:35.273452 kubelet[2518]: I0910 00:38:35.271901 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bb5dbd9c4-mldfs" podStartSLOduration=1.572200501 podStartE2EDuration="10.271851062s" podCreationTimestamp="2025-09-10 00:38:25 +0000 UTC" firstStartedPulling="2025-09-10 00:38:26.401560348 +0000 UTC m=+41.713775047" lastFinishedPulling="2025-09-10 00:38:35.101210919 +0000 UTC m=+50.413425608" observedRunningTime="2025-09-10 00:38:35.271089194 +0000 UTC m=+50.583303883" watchObservedRunningTime="2025-09-10 00:38:35.271851062 +0000 UTC m=+50.584065751" Sep 10 00:38:35.763911 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:49086.service - OpenSSH per-connection server daemon (10.0.0.1:49086). Sep 10 00:38:35.818261 sshd[5354]: Accepted publickey for core from 10.0.0.1 port 49086 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:35.821030 sshd[5354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:35.827044 systemd-logind[1448]: New session 10 of user core. Sep 10 00:38:35.832369 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:38:35.986182 sshd[5354]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:35.991940 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:49086.service: Deactivated successfully. Sep 10 00:38:35.996400 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:38:35.997258 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:38:35.998456 systemd-logind[1448]: Removed session 10. 
Sep 10 00:38:38.529271 containerd[1471]: time="2025-09-10T00:38:38.529217281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:38.530294 containerd[1471]: time="2025-09-10T00:38:38.530226977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 10 00:38:38.531445 containerd[1471]: time="2025-09-10T00:38:38.531384630Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:38.534238 containerd[1471]: time="2025-09-10T00:38:38.534189841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:38.534918 containerd[1471]: time="2025-09-10T00:38:38.534888792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.432898832s" Sep 10 00:38:38.534989 containerd[1471]: time="2025-09-10T00:38:38.534920974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:38:38.536138 containerd[1471]: time="2025-09-10T00:38:38.536094909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:38:38.537234 containerd[1471]: time="2025-09-10T00:38:38.537194931Z" level=info msg="CreateContainer within sandbox \"3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:38:38.799781 containerd[1471]: time="2025-09-10T00:38:38.799535863Z" level=info msg="CreateContainer within sandbox \"3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c4b565269206fe1deabe854a395d075e40bb495b30bce6e5d6c1f5d7cdf6e0a7\"" Sep 10 00:38:38.800457 containerd[1471]: time="2025-09-10T00:38:38.800391289Z" level=info msg="StartContainer for \"c4b565269206fe1deabe854a395d075e40bb495b30bce6e5d6c1f5d7cdf6e0a7\"" Sep 10 00:38:38.842493 systemd[1]: Started cri-containerd-c4b565269206fe1deabe854a395d075e40bb495b30bce6e5d6c1f5d7cdf6e0a7.scope - libcontainer container c4b565269206fe1deabe854a395d075e40bb495b30bce6e5d6c1f5d7cdf6e0a7. 
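The PullImage, CreateContainer within sandbox, and StartContainer steps traced above are issued by kubelet over CRI; for illustration only, here is a minimal standalone sketch of the same lifecycle against containerd's classic Go client (the container ID and snapshot name are placeholders, not taken from this log, and error handling is reduced to log.Fatal):

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd socket the kubelet uses on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull and unpack the image (the "PullImage ... returns image reference" step).
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create the container (the "CreateContainer within sandbox ..." step).
	container, err := client.NewContainer(ctx, "example-apiserver",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("example-apiserver-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// Start the task (the "StartContainer ... returns successfully" step).
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}

In the real flow, kubelet additionally places the task in a systemd scope, which is why the journal shows "Started cri-containerd-<id>.scope" immediately after container creation.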
Sep 10 00:38:39.006713 containerd[1471]: time="2025-09-10T00:38:39.006643505Z" level=info msg="StartContainer for \"c4b565269206fe1deabe854a395d075e40bb495b30bce6e5d6c1f5d7cdf6e0a7\" returns successfully" Sep 10 00:38:39.189147 kubelet[2518]: E0910 00:38:39.189091 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:40.023206 kubelet[2518]: I0910 00:38:40.023112 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7c97f695-66hjz" podStartSLOduration=30.74555463 podStartE2EDuration="39.023087736s" podCreationTimestamp="2025-09-10 00:38:01 +0000 UTC" firstStartedPulling="2025-09-10 00:38:30.258338769 +0000 UTC m=+45.570553458" lastFinishedPulling="2025-09-10 00:38:38.535871875 +0000 UTC m=+53.848086564" observedRunningTime="2025-09-10 00:38:40.021881291 +0000 UTC m=+55.334095980" watchObservedRunningTime="2025-09-10 00:38:40.023087736 +0000 UTC m=+55.335302425" Sep 10 00:38:41.010560 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:41826.service - OpenSSH per-connection server daemon (10.0.0.1:41826). Sep 10 00:38:41.060946 sshd[5432]: Accepted publickey for core from 10.0.0.1 port 41826 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:41.062769 sshd[5432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:41.068043 systemd-logind[1448]: New session 11 of user core. Sep 10 00:38:41.075307 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:38:41.235199 kubelet[2518]: I0910 00:38:41.235165 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:38:41.330445 sshd[5432]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:41.337571 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:41826.service: Deactivated successfully. Sep 10 00:38:41.340504 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:38:41.341224 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:38:41.342292 systemd-logind[1448]: Removed session 11. Sep 10 00:38:44.775377 containerd[1471]: time="2025-09-10T00:38:44.775321520Z" level=info msg="StopPodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\"" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.849 [WARNING][5488] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac5d8d6b-a420-4114-bc22-8d86a77072d3", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2", Pod:"coredns-668d6bf9bc-lvpsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa939c55d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.850 [INFO][5488] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.850 [INFO][5488] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" iface="eth0" netns="" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.850 [INFO][5488] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.850 [INFO][5488] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.876 [INFO][5496] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.877 [INFO][5496] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.877 [INFO][5496] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.886 [WARNING][5496] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.886 [INFO][5496] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.888 [INFO][5496] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:44.896068 containerd[1471]: 2025-09-10 00:38:44.892 [INFO][5488] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.896984 containerd[1471]: time="2025-09-10T00:38:44.896900541Z" level=info msg="TearDown network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" successfully" Sep 10 00:38:44.896984 containerd[1471]: time="2025-09-10T00:38:44.896955308Z" level=info msg="StopPodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" returns successfully" Sep 10 00:38:44.908288 containerd[1471]: time="2025-09-10T00:38:44.908214759Z" level=info msg="RemovePodSandbox for \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\"" Sep 10 00:38:44.911974 containerd[1471]: time="2025-09-10T00:38:44.911908375Z" level=info msg="Forcibly stopping sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\"" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.954 [WARNING][5514] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ac5d8d6b-a420-4114-bc22-8d86a77072d3", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b8469498ce3e7126e5d9b647d2cc524d8e9d0696584b62856e5330cc251f04e2", Pod:"coredns-668d6bf9bc-lvpsb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieaa939c55d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.954 [INFO][5514] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.954 [INFO][5514] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" iface="eth0" netns="" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.954 [INFO][5514] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.954 [INFO][5514] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.978 [INFO][5523] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.979 [INFO][5523] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.979 [INFO][5523] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.986 [WARNING][5523] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.987 [INFO][5523] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" HandleID="k8s-pod-network.f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Workload="localhost-k8s-coredns--668d6bf9bc--lvpsb-eth0" Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.989 [INFO][5523] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:44.996602 containerd[1471]: 2025-09-10 00:38:44.993 [INFO][5514] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48" Sep 10 00:38:45.094360 containerd[1471]: time="2025-09-10T00:38:44.996640937Z" level=info msg="TearDown network for sandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" successfully" Sep 10 00:38:45.112538 containerd[1471]: time="2025-09-10T00:38:45.112279594Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:45.112538 containerd[1471]: time="2025-09-10T00:38:45.112361342Z" level=info msg="RemovePodSandbox \"f155f9f5fb935f8b6e09b31e8f0f918622dfcad7f3eef40fe3e50279ae1f9f48\" returns successfully" Sep 10 00:38:45.116013 containerd[1471]: time="2025-09-10T00:38:45.115967974Z" level=info msg="StopPodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\"" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.158 [WARNING][5547] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2p7zs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025065d-06f1-4598-a92c-46630a2af417", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0", Pod:"csi-node-driver-2p7zs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58ef5ea5032", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.158 [INFO][5547] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.158 [INFO][5547] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" iface="eth0" netns="" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.158 [INFO][5547] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.158 [INFO][5547] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.182 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.185 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.185 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.191 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.191 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.193 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:45.202932 containerd[1471]: 2025-09-10 00:38:45.197 [INFO][5547] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.203441 containerd[1471]: time="2025-09-10T00:38:45.202969430Z" level=info msg="TearDown network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" successfully" Sep 10 00:38:45.203441 containerd[1471]: time="2025-09-10T00:38:45.202996962Z" level=info msg="StopPodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" returns successfully" Sep 10 00:38:45.203672 containerd[1471]: time="2025-09-10T00:38:45.203623525Z" level=info msg="RemovePodSandbox for \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\"" Sep 10 00:38:45.203714 containerd[1471]: time="2025-09-10T00:38:45.203680586Z" level=info msg="Forcibly stopping sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\"" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.251 [WARNING][5573] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--2p7zs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2025065d-06f1-4598-a92c-46630a2af417", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0", Pod:"csi-node-driver-2p7zs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali58ef5ea5032", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.251 [INFO][5573] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.251 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" iface="eth0" netns="" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.251 [INFO][5573] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.251 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.285 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.285 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.285 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.292 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.292 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" HandleID="k8s-pod-network.77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Workload="localhost-k8s-csi--node--driver--2p7zs-eth0" Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.294 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:45.303146 containerd[1471]: 2025-09-10 00:38:45.297 [INFO][5573] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4" Sep 10 00:38:45.304168 containerd[1471]: time="2025-09-10T00:38:45.303190004Z" level=info msg="TearDown network for sandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" successfully" Sep 10 00:38:45.308269 containerd[1471]: time="2025-09-10T00:38:45.308207076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:45.308341 containerd[1471]: time="2025-09-10T00:38:45.308306037Z" level=info msg="RemovePodSandbox \"77a2953481bda9c755d4166de0b2debe4ff0e0f17936d12f2c0c10118185f3e4\" returns successfully" Sep 10 00:38:45.308929 containerd[1471]: time="2025-09-10T00:38:45.308896451Z" level=info msg="StopPodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\"" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.357 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c78a26e-1b4d-439c-a913-c9a5704cad9a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204", Pod:"calico-apiserver-6c7c97f695-66hjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e4bb2e0449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.357 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.357 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" iface="eth0" netns="" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.357 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.358 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.387 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.388 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.388 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.394 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.394 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.396 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:45.403369 containerd[1471]: 2025-09-10 00:38:45.399 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.428690 containerd[1471]: time="2025-09-10T00:38:45.403168590Z" level=info msg="TearDown network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" successfully" Sep 10 00:38:45.428690 containerd[1471]: time="2025-09-10T00:38:45.428675465Z" level=info msg="StopPodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" returns successfully" Sep 10 00:38:45.429507 containerd[1471]: time="2025-09-10T00:38:45.429458852Z" level=info msg="RemovePodSandbox for \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\"" Sep 10 00:38:45.429507 containerd[1471]: time="2025-09-10T00:38:45.429500863Z" level=info msg="Forcibly stopping sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\"" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.478 [WARNING][5624] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c78a26e-1b4d-439c-a913-c9a5704cad9a", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3245371a67a5e6aef5bbac26cd25eea69d03cd303f2d75f551507b51df8e4204", Pod:"calico-apiserver-6c7c97f695-66hjz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e4bb2e0449", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.479 [INFO][5624] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.479 [INFO][5624] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" iface="eth0" netns="" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.479 [INFO][5624] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.479 [INFO][5624] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.506 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.506 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.506 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.512 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.512 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" HandleID="k8s-pod-network.fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Workload="localhost-k8s-calico--apiserver--6c7c97f695--66hjz-eth0" Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.514 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:45.521583 containerd[1471]: 2025-09-10 00:38:45.518 [INFO][5624] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291" Sep 10 00:38:45.522347 containerd[1471]: time="2025-09-10T00:38:45.521634784Z" level=info msg="TearDown network for sandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" successfully" Sep 10 00:38:45.625793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107081035.mount: Deactivated successfully. Sep 10 00:38:45.678479 containerd[1471]: time="2025-09-10T00:38:45.678212233Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:45.678479 containerd[1471]: time="2025-09-10T00:38:45.678311264Z" level=info msg="RemovePodSandbox \"fb9af77f990226cd87cfe3018387aec29e02706720d9f90b593464dce1b37291\" returns successfully" Sep 10 00:38:45.679461 containerd[1471]: time="2025-09-10T00:38:45.679413869Z" level=info msg="StopPodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\"" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.721 [WARNING][5649] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bwmkx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"30f4ddb7-eda4-4f91-9889-caa4c6fe0752", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864", Pod:"goldmane-54d579b49d-bwmkx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali750057a41c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.721 [INFO][5649] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.721 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" iface="eth0" netns="" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.721 [INFO][5649] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.721 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.827 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.827 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.827 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.834 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.834 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.836 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:45.842625 containerd[1471]: 2025-09-10 00:38:45.839 [INFO][5649] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.843773 containerd[1471]: time="2025-09-10T00:38:45.842706446Z" level=info msg="TearDown network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" successfully" Sep 10 00:38:45.843773 containerd[1471]: time="2025-09-10T00:38:45.842747436Z" level=info msg="StopPodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" returns successfully" Sep 10 00:38:45.843773 containerd[1471]: time="2025-09-10T00:38:45.843378538Z" level=info msg="RemovePodSandbox for \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\"" Sep 10 00:38:45.843773 containerd[1471]: time="2025-09-10T00:38:45.843406993Z" level=info msg="Forcibly stopping sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\"" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.889 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bwmkx-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"30f4ddb7-eda4-4f91-9889-caa4c6fe0752", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864", Pod:"goldmane-54d579b49d-bwmkx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali750057a41c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.889 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.889 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" iface="eth0" netns="" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.889 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.889 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.913 [INFO][5684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.913 [INFO][5684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.913 [INFO][5684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.922 [WARNING][5684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.922 [INFO][5684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" HandleID="k8s-pod-network.c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Workload="localhost-k8s-goldmane--54d579b49d--bwmkx-eth0" Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.925 [INFO][5684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:45.931701 containerd[1471]: 2025-09-10 00:38:45.928 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7" Sep 10 00:38:45.931701 containerd[1471]: time="2025-09-10T00:38:45.931642938Z" level=info msg="TearDown network for sandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" successfully" Sep 10 00:38:46.007287 containerd[1471]: time="2025-09-10T00:38:46.007198087Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:46.007475 containerd[1471]: time="2025-09-10T00:38:46.007313410Z" level=info msg="RemovePodSandbox \"c183be75f94d4acbdf628f57471e4fd3dea49dba79c40caff938bdedf6af54b7\" returns successfully" Sep 10 00:38:46.008635 containerd[1471]: time="2025-09-10T00:38:46.008592154Z" level=info msg="StopPodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\"" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.060 [WARNING][5704] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0", GenerateName:"calico-kube-controllers-c6cb89779-", Namespace:"calico-system", SelfLink:"", UID:"3bc944ca-86e6-4492-bde5-0808ef5e617e", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6cb89779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d", Pod:"calico-kube-controllers-c6cb89779-d447p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ec0679a738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.061 [INFO][5704] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.061 [INFO][5704] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" iface="eth0" netns="" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.061 [INFO][5704] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.061 [INFO][5704] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.092 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.093 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.093 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.099 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.100 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.101 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:46.107702 containerd[1471]: 2025-09-10 00:38:46.104 [INFO][5704] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.107702 containerd[1471]: time="2025-09-10T00:38:46.107691582Z" level=info msg="TearDown network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" successfully" Sep 10 00:38:46.108766 containerd[1471]: time="2025-09-10T00:38:46.107726591Z" level=info msg="StopPodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" returns successfully" Sep 10 00:38:46.108766 containerd[1471]: time="2025-09-10T00:38:46.108419019Z" level=info msg="RemovePodSandbox for \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\"" Sep 10 00:38:46.108766 containerd[1471]: time="2025-09-10T00:38:46.108460841Z" level=info msg="Forcibly stopping sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\"" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.187 [WARNING][5733] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0", GenerateName:"calico-kube-controllers-c6cb89779-", Namespace:"calico-system", SelfLink:"", UID:"3bc944ca-86e6-4492-bde5-0808ef5e617e", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c6cb89779", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f99cb38dee9f8c3a74cf6d96f4d666929aac1b20a8189f3f4e43c8478163f95d", Pod:"calico-kube-controllers-c6cb89779-d447p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ec0679a738", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.187 [INFO][5733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.187 [INFO][5733] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" iface="eth0" netns="" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.187 [INFO][5733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.187 [INFO][5733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.242 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.242 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.242 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.275 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.275 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" HandleID="k8s-pod-network.0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Workload="localhost-k8s-calico--kube--controllers--c6cb89779--d447p-eth0" Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.278 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:46.288237 containerd[1471]: 2025-09-10 00:38:46.284 [INFO][5733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4" Sep 10 00:38:46.290977 containerd[1471]: time="2025-09-10T00:38:46.290287497Z" level=info msg="TearDown network for sandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" successfully" Sep 10 00:38:46.348509 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:41836.service - OpenSSH per-connection server daemon (10.0.0.1:41836). Sep 10 00:38:46.409279 sshd[5751]: Accepted publickey for core from 10.0.0.1 port 41836 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:46.411930 sshd[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:46.419098 systemd-logind[1448]: New session 12 of user core. Sep 10 00:38:46.424332 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 00:38:47.338307 containerd[1471]: time="2025-09-10T00:38:47.338229165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:47.339568 containerd[1471]: time="2025-09-10T00:38:47.338340881Z" level=info msg="RemovePodSandbox \"0486f91fa2ba216e4156727df3839b94d94d9e22d01bcfe3e438e5eb5bdb7ea4\" returns successfully" Sep 10 00:38:47.339568 containerd[1471]: time="2025-09-10T00:38:47.339511845Z" level=info msg="StopPodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\"" Sep 10 00:38:47.529199 sshd[5751]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.405 [WARNING][5775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c002682-d6bc-423d-b99d-e4ec01f48f3d", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c", Pod:"coredns-668d6bf9bc-kjzlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ef438801d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.406 [INFO][5775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.406 [INFO][5775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" iface="eth0" netns="" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.406 [INFO][5775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.406 [INFO][5775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.427 [INFO][5784] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.427 [INFO][5784] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.427 [INFO][5784] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.519 [WARNING][5784] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.519 [INFO][5784] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.524 [INFO][5784] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:47.540318 containerd[1471]: 2025-09-10 00:38:47.536 [INFO][5775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.541193 containerd[1471]: time="2025-09-10T00:38:47.540368167Z" level=info msg="TearDown network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" successfully" Sep 10 00:38:47.541334 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:41836.service: Deactivated successfully. Sep 10 00:38:47.543548 containerd[1471]: time="2025-09-10T00:38:47.540404196Z" level=info msg="StopPodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" returns successfully" Sep 10 00:38:47.543548 containerd[1471]: time="2025-09-10T00:38:47.542408411Z" level=info msg="RemovePodSandbox for \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\"" Sep 10 00:38:47.543548 containerd[1471]: time="2025-09-10T00:38:47.542452946Z" level=info msg="Forcibly stopping sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\"" Sep 10 00:38:47.545388 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:38:47.546526 containerd[1471]: time="2025-09-10T00:38:47.546380628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:47.548860 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:38:47.551984 containerd[1471]: time="2025-09-10T00:38:47.551890750Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 10 00:38:47.554478 containerd[1471]: time="2025-09-10T00:38:47.554437152Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:47.555516 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:41842.service - OpenSSH per-connection server daemon (10.0.0.1:41842). Sep 10 00:38:47.558507 systemd-logind[1448]: Removed session 12. 
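The ipam_plugin entries above trace a fixed sequence for every DEL: acquire the host-wide IPAM lock, try to release by handleID, fall back to the workloadID, treat `Asked to release address but it doesn't exist` as success, and release the lock. A toy sketch of that idempotent release pattern, assuming a map-backed store in place of Calico's datastore:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errNotFound stands in for the datastore's "allocation does not exist" error.
var errNotFound = errors.New("allocation not found")

// ipamStore is a toy stand-in for the IPAM datastore keyed by handle ID.
type ipamStore struct {
	mu     sync.Mutex // plays the role of the host-wide IPAM lock
	byHand map[string][]string
}

func (s *ipamStore) releaseByHandle(handleID string) error {
	ips, ok := s.byHand[handleID]
	if !ok {
		return errNotFound
	}
	delete(s.byHand, handleID)
	fmt.Printf("released %v for handle %s\n", ips, handleID)
	return nil
}

// releaseForTeardown mirrors the logged flow: lock, release by handle,
// fall back to the workload ID, and treat "not found" as success so that
// a repeated CNI DEL stays idempotent.
func (s *ipamStore) releaseForTeardown(handleID, workloadID string) error {
	s.mu.Lock() // "Acquired host-wide IPAM lock."
	defer s.mu.Unlock()

	if err := s.releaseByHandle(handleID); err == nil {
		return nil
	} else if !errors.Is(err, errNotFound) {
		return err
	}
	// "Asked to release address but it doesn't exist. Ignoring" --
	// fall back to the workload ID before giving up.
	if err := s.releaseByHandle(workloadID); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	s := &ipamStore{byHand: map[string][]string{}}
	// Releasing an address that was already freed must not fail.
	if err := s.releaseForTeardown("k8s-pod-network.0486f9", "localhost-k8s-calico"); err != nil {
		fmt.Println("teardown failed:", err)
		return
	}
	fmt.Println("teardown complete (idempotent)")
}
```

Idempotence matters here because the kubelet retries StopPodSandbox and RemovePodSandbox, so the same DEL can arrive several times for a sandbox whose address was already freed.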
Sep 10 00:38:47.564764 containerd[1471]: time="2025-09-10T00:38:47.564703418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:47.572423 containerd[1471]: time="2025-09-10T00:38:47.572346282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 9.034205408s" Sep 10 00:38:47.572595 containerd[1471]: time="2025-09-10T00:38:47.572431056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 00:38:47.579780 containerd[1471]: time="2025-09-10T00:38:47.579736137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:38:47.582259 containerd[1471]: time="2025-09-10T00:38:47.582135967Z" level=info msg="CreateContainer within sandbox \"dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 00:38:47.592066 sshd[5809]: Accepted publickey for core from 10.0.0.1 port 41842 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:47.595564 sshd[5809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:47.614853 systemd-logind[1448]: New session 13 of user core. Sep 10 00:38:47.619775 containerd[1471]: time="2025-09-10T00:38:47.617594681Z" level=info msg="CreateContainer within sandbox \"dbe2c80b95cc8241aec2684b92af4226d44b7b546050cbdaef71864aa0d78864\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6cb1c0005f28496684ab3713e8b273767e8ee8afb82e9f966ac76fca0cf31120\"" Sep 10 00:38:47.619775 containerd[1471]: time="2025-09-10T00:38:47.618610646Z" level=info msg="StartContainer for \"6cb1c0005f28496684ab3713e8b273767e8ee8afb82e9f966ac76fca0cf31120\"" Sep 10 00:38:47.622416 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.604 [WARNING][5810] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c002682-d6bc-423d-b99d-e4ec01f48f3d", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 37, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fa511bd9e0c87e0c03946b9ca313c3837a4ebf830c2a96beec2f5be834c1754c", Pod:"coredns-668d6bf9bc-kjzlx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ef438801d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.605 [INFO][5810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.605 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" iface="eth0" netns="" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.605 [INFO][5810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.605 [INFO][5810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.653 [INFO][5820] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.654 [INFO][5820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.654 [INFO][5820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.660 [WARNING][5820] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.660 [INFO][5820] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" HandleID="k8s-pod-network.d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Workload="localhost-k8s-coredns--668d6bf9bc--kjzlx-eth0" Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.697 [INFO][5820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:47.708987 containerd[1471]: 2025-09-10 00:38:47.705 [INFO][5810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f" Sep 10 00:38:47.709715 containerd[1471]: time="2025-09-10T00:38:47.709040063Z" level=info msg="TearDown network for sandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" successfully" Sep 10 00:38:47.781358 systemd[1]: Started cri-containerd-6cb1c0005f28496684ab3713e8b273767e8ee8afb82e9f966ac76fca0cf31120.scope - libcontainer container 6cb1c0005f28496684ab3713e8b273767e8ee8afb82e9f966ac76fca0cf31120. Sep 10 00:38:47.835198 containerd[1471]: time="2025-09-10T00:38:47.835007829Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:47.835198 containerd[1471]: time="2025-09-10T00:38:47.835106409Z" level=info msg="RemovePodSandbox \"d071a97c69ba059eebf9810c8f800a92bf4c75a8564885a072e16b018504406f\" returns successfully" Sep 10 00:38:47.836451 containerd[1471]: time="2025-09-10T00:38:47.836072998Z" level=info msg="StopPodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\"" Sep 10 00:38:47.854321 containerd[1471]: time="2025-09-10T00:38:47.853189876Z" level=info msg="StartContainer for \"6cb1c0005f28496684ab3713e8b273767e8ee8afb82e9f966ac76fca0cf31120\" returns successfully" Sep 10 00:38:47.893624 sshd[5809]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:47.907705 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:41842.service: Deactivated successfully. Sep 10 00:38:47.916495 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:38:47.926429 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:38:47.935608 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:41844.service - OpenSSH per-connection server daemon (10.0.0.1:41844). Sep 10 00:38:47.937472 systemd-logind[1448]: Removed session 13. 
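The goldmane episode that completes above follows containerd's standard lifecycle: PullImage (9.03s for a cold ~66 MB pull), CreateContainer within an existing sandbox, then StartContainer. Outside the CRI path, the same three steps with the containerd Go client look roughly like this; the socket path, namespace, and image ref are taken from the log, everything else is illustrative:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Connect to the same containerd instance the kubelet uses; CRI pods
	// and images live in the "k8s.io" namespace.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: resolve the tag, fetch layers, unpack a snapshot.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/goldmane:v3.30.3",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer: metadata + snapshot + OCI spec derived from the image.
	container, err := client.NewContainer(ctx, "goldmane-demo",
		containerd.WithNewSnapshot("goldmane-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: a task is the running instance of the container.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Println("container started; pid", task.Pid())
}
```

This standalone flow skips the sandbox plumbing the CRI layer adds, but because it targets the `k8s.io` namespace it sees the same images the pulls above produced.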
Sep 10 00:38:47.979284 containerd[1471]: time="2025-09-10T00:38:47.979240051Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:47.985145 containerd[1471]: time="2025-09-10T00:38:47.983391096Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 00:38:47.985145 containerd[1471]: time="2025-09-10T00:38:47.984492715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 403.730283ms" Sep 10 00:38:47.985145 containerd[1471]: time="2025-09-10T00:38:47.984520549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:38:47.986237 sshd[5892]: Accepted publickey for core from 10.0.0.1 port 41844 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:47.986803 containerd[1471]: time="2025-09-10T00:38:47.986774356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 00:38:47.990236 containerd[1471]: time="2025-09-10T00:38:47.990108932Z" level=info msg="CreateContainer within sandbox \"44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:38:47.992147 sshd[5892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:48.002365 systemd-logind[1448]: New session 14 of user core. Sep 10 00:38:48.007968 containerd[1471]: time="2025-09-10T00:38:48.007345662Z" level=info msg="CreateContainer within sandbox \"44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"84d1a4671d692b99d84f4090f74a21a96538ff84d66d29baa92a9eb9b4a25e63\"" Sep 10 00:38:48.010254 containerd[1471]: time="2025-09-10T00:38:48.008855279Z" level=info msg="StartContainer for \"84d1a4671d692b99d84f4090f74a21a96538ff84d66d29baa92a9eb9b4a25e63\"" Sep 10 00:38:48.013335 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:47.957 [WARNING][5881] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97", Pod:"calico-apiserver-6c7c97f695-rjghd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1e2e0ca6a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:47.958 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:47.958 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" iface="eth0" netns="" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:47.958 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:47.958 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.000 [INFO][5896] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.001 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.001 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.010 [WARNING][5896] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.010 [INFO][5896] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.012 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:48.024970 containerd[1471]: 2025-09-10 00:38:48.020 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.025713 containerd[1471]: time="2025-09-10T00:38:48.025010888Z" level=info msg="TearDown network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" successfully" Sep 10 00:38:48.025713 containerd[1471]: time="2025-09-10T00:38:48.025058581Z" level=info msg="StopPodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" returns successfully" Sep 10 00:38:48.025713 containerd[1471]: time="2025-09-10T00:38:48.025577944Z" level=info msg="RemovePodSandbox for \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\"" Sep 10 00:38:48.025713 containerd[1471]: time="2025-09-10T00:38:48.025605708Z" level=info msg="Forcibly stopping sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\"" Sep 10 00:38:48.047471 systemd[1]: Started cri-containerd-84d1a4671d692b99d84f4090f74a21a96538ff84d66d29baa92a9eb9b4a25e63.scope - libcontainer container 84d1a4671d692b99d84f4090f74a21a96538ff84d66d29baa92a9eb9b4a25e63. Sep 10 00:38:48.130195 containerd[1471]: time="2025-09-10T00:38:48.129399817Z" level=info msg="StartContainer for \"84d1a4671d692b99d84f4090f74a21a96538ff84d66d29baa92a9eb9b4a25e63\" returns successfully" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.076 [WARNING][5929] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0", GenerateName:"calico-apiserver-6c7c97f695-", Namespace:"calico-apiserver", SelfLink:"", UID:"90f17e30-9dec-4e54-8e40-8da4f9ce8c2b", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 38, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c7c97f695", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"44c98889262892e0e50b70952d47792e22ce845e16882575e27e08e7e8313f97", Pod:"calico-apiserver-6c7c97f695-rjghd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1e2e0ca6a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.078 [INFO][5929] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.078 [INFO][5929] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" iface="eth0" netns="" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.078 [INFO][5929] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.078 [INFO][5929] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.122 [INFO][5954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.122 [INFO][5954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.122 [INFO][5954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.132 [WARNING][5954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.132 [INFO][5954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" HandleID="k8s-pod-network.03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Workload="localhost-k8s-calico--apiserver--6c7c97f695--rjghd-eth0" Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.135 [INFO][5954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:48.142507 containerd[1471]: 2025-09-10 00:38:48.139 [INFO][5929] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf" Sep 10 00:38:48.143081 containerd[1471]: time="2025-09-10T00:38:48.142580308Z" level=info msg="TearDown network for sandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" successfully" Sep 10 00:38:48.149111 containerd[1471]: time="2025-09-10T00:38:48.149063884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:38:48.149206 containerd[1471]: time="2025-09-10T00:38:48.149186561Z" level=info msg="RemovePodSandbox \"03e42c5284242b17dc533a9f9f45e984ff4b581ff968e4e59062291d285f3bbf\" returns successfully" Sep 10 00:38:48.151156 containerd[1471]: time="2025-09-10T00:38:48.150356050Z" level=info msg="StopPodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\"" Sep 10 00:38:48.194340 sshd[5892]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:48.202397 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:41844.service: Deactivated successfully. Sep 10 00:38:48.205582 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:38:48.206391 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:38:48.208171 systemd-logind[1448]: Removed session 14. Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.200 [WARNING][5988] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" WorkloadEndpoint="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.200 [INFO][5988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.200 [INFO][5988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" iface="eth0" netns="" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.200 [INFO][5988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.200 [INFO][5988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.228 [INFO][6000] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.228 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.228 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.237 [WARNING][6000] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.237 [INFO][6000] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.239 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:48.246439 containerd[1471]: 2025-09-10 00:38:48.242 [INFO][5988] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.246974 containerd[1471]: time="2025-09-10T00:38:48.246473189Z" level=info msg="TearDown network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" successfully" Sep 10 00:38:48.246974 containerd[1471]: time="2025-09-10T00:38:48.246507815Z" level=info msg="StopPodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" returns successfully" Sep 10 00:38:48.247246 containerd[1471]: time="2025-09-10T00:38:48.247209261Z" level=info msg="RemovePodSandbox for \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\"" Sep 10 00:38:48.247298 containerd[1471]: time="2025-09-10T00:38:48.247250771Z" level=info msg="Forcibly stopping sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\"" Sep 10 00:38:48.288334 kubelet[2518]: I0910 00:38:48.288095 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c7c97f695-rjghd" podStartSLOduration=31.43682702 podStartE2EDuration="47.288041791s" podCreationTimestamp="2025-09-10 00:38:01 +0000 UTC" firstStartedPulling="2025-09-10 00:38:32.134611312 +0000 UTC m=+47.446826001" lastFinishedPulling="2025-09-10 00:38:47.985826083 +0000 UTC m=+63.298040772" observedRunningTime="2025-09-10 00:38:48.286002691 +0000 UTC m=+63.598217410" watchObservedRunningTime="2025-09-10 00:38:48.288041791 +0000 UTC m=+63.600256490" Sep 10 00:38:48.311547 kubelet[2518]: I0910 00:38:48.311437 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-bwmkx" podStartSLOduration=29.763313809 podStartE2EDuration="45.311413871s" podCreationTimestamp="2025-09-10 00:38:03 +0000 UTC" firstStartedPulling="2025-09-10 00:38:32.029399853 +0000 UTC m=+47.341614542" lastFinishedPulling="2025-09-10 00:38:47.577499915 +0000 UTC m=+62.889714604" observedRunningTime="2025-09-10 00:38:48.310996956 +0000 UTC m=+63.623211635" watchObservedRunningTime="2025-09-10 00:38:48.311413871 +0000 UTC m=+63.623628560" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.317 [WARNING][6019] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" WorkloadEndpoint="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.317 [INFO][6019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.317 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" iface="eth0" netns="" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.317 [INFO][6019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.317 [INFO][6019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.352 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.353 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.353 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.362 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.362 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" HandleID="k8s-pod-network.2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Workload="localhost-k8s-whisker--678754df47--sfdv9-eth0" Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.364 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:38:48.375103 containerd[1471]: 2025-09-10 00:38:48.370 [INFO][6019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858" Sep 10 00:38:48.375966 containerd[1471]: time="2025-09-10T00:38:48.375175446Z" level=info msg="TearDown network for sandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" successfully" Sep 10 00:38:48.565170 containerd[1471]: time="2025-09-10T00:38:48.564912319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:38:48.565170 containerd[1471]: time="2025-09-10T00:38:48.565035015Z" level=info msg="RemovePodSandbox \"2f3ee7947b55b272612ba656e5933e0e60f9b120f3113474d048f02f92e71858\" returns successfully" Sep 10 00:38:49.282793 kubelet[2518]: I0910 00:38:49.282705 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:38:52.780876 kubelet[2518]: E0910 00:38:52.780807 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:53.069132 containerd[1471]: time="2025-09-10T00:38:53.068902951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:53.115440 containerd[1471]: time="2025-09-10T00:38:53.115294078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 10 00:38:53.119199 containerd[1471]: time="2025-09-10T00:38:53.119164502Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:53.122952 containerd[1471]: time="2025-09-10T00:38:53.122894266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:53.124384 containerd[1471]: time="2025-09-10T00:38:53.124110518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 5.137301715s" Sep 10 00:38:53.124384 containerd[1471]: time="2025-09-10T00:38:53.124170754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 00:38:53.128152 containerd[1471]: time="2025-09-10T00:38:53.128081356Z" level=info msg="CreateContainer within sandbox \"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 00:38:53.172560 containerd[1471]: time="2025-09-10T00:38:53.172484274Z" level=info msg="CreateContainer within sandbox \"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2d1fef3f9ac55cd1b142b54fead4ed900a48cfc0b85a35d3090292c244fb6bd7\"" Sep 10 00:38:53.174060 containerd[1471]: time="2025-09-10T00:38:53.173891594Z" level=info msg="StartContainer for \"2d1fef3f9ac55cd1b142b54fead4ed900a48cfc0b85a35d3090292c244fb6bd7\"" Sep 10 00:38:53.222439 systemd[1]: Started cri-containerd-2d1fef3f9ac55cd1b142b54fead4ed900a48cfc0b85a35d3090292c244fb6bd7.scope - libcontainer container 2d1fef3f9ac55cd1b142b54fead4ed900a48cfc0b85a35d3090292c244fb6bd7. Sep 10 00:38:53.225859 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:56730.service - OpenSSH per-connection server daemon (10.0.0.1:56730). 
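Each `Started cri-containerd-<id>.scope - libcontainer container <id>` entry above is containerd's runc shim asking systemd to create a transient scope unit around the container's init process, so that systemd owns and tracks its cgroup. A sketch of the go-systemd D-Bus call behind that pattern; the unit name and PID here are placeholders, it needs a root connection to the system bus, and the exact properties the shim sets may differ:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	systemddbus "github.com/coreos/go-systemd/v22/dbus"
	godbus "github.com/godbus/dbus/v5"
)

func main() {
	ctx := context.Background()
	conn, err := systemddbus.NewSystemConnectionContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	pid := os.Getpid() // stand-in; the shim passes the container's init PID
	unit := fmt.Sprintf("cri-containerd-%s.scope", "demo0123")

	props := []systemddbus.Property{
		systemddbus.PropDescription("libcontainer container demo0123"),
		// Adopting the PID into the scope is what makes systemd own its cgroup.
		{Name: "PIDs", Value: godbus.MakeVariant([]uint32{uint32(pid)})},
		{Name: "Delegate", Value: godbus.MakeVariant(true)},
	}

	done := make(chan string, 1)
	if _, err := conn.StartTransientUnitContext(ctx, unit, "replace", props, done); err != nil {
		log.Fatal(err)
	}
	log.Printf("scope %s: %s", unit, <-done)
}
```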
Sep 10 00:38:53.301915 sshd[6155]: Accepted publickey for core from 10.0.0.1 port 56730 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:53.304232 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:53.310401 systemd-logind[1448]: New session 15 of user core. Sep 10 00:38:53.319387 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 00:38:53.450834 containerd[1471]: time="2025-09-10T00:38:53.450699744Z" level=info msg="StartContainer for \"2d1fef3f9ac55cd1b142b54fead4ed900a48cfc0b85a35d3090292c244fb6bd7\" returns successfully" Sep 10 00:38:53.454745 containerd[1471]: time="2025-09-10T00:38:53.454699267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 00:38:53.620190 sshd[6155]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:53.624949 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:56730.service: Deactivated successfully. Sep 10 00:38:53.627889 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:38:53.628642 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:38:53.630051 systemd-logind[1448]: Removed session 15. Sep 10 00:38:54.784195 kubelet[2518]: E0910 00:38:54.784151 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:55.824267 containerd[1471]: time="2025-09-10T00:38:55.824104412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:55.826028 containerd[1471]: time="2025-09-10T00:38:55.825970082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 10 00:38:55.828565 containerd[1471]: time="2025-09-10T00:38:55.828479981Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:55.831538 containerd[1471]: time="2025-09-10T00:38:55.831497506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:38:55.832566 containerd[1471]: time="2025-09-10T00:38:55.832504263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.377756923s" Sep 10 00:38:55.832566 containerd[1471]: time="2025-09-10T00:38:55.832563277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 10 00:38:55.835475 containerd[1471]: time="2025-09-10T00:38:55.835435282Z" level=info msg="CreateContainer within sandbox \"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 00:38:55.854524 containerd[1471]: 
time="2025-09-10T00:38:55.854452318Z" level=info msg="CreateContainer within sandbox \"74424106e13eb73b2004e0c8b7ef3076e6d5bf0b6f8547eff744c1fd48599fd0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8285a771cf75a9669d29d66a0169748e9f83056ce9c0069141e539459e836e2e\"" Sep 10 00:38:55.855329 containerd[1471]: time="2025-09-10T00:38:55.855275902Z" level=info msg="StartContainer for \"8285a771cf75a9669d29d66a0169748e9f83056ce9c0069141e539459e836e2e\"" Sep 10 00:38:55.897403 systemd[1]: Started cri-containerd-8285a771cf75a9669d29d66a0169748e9f83056ce9c0069141e539459e836e2e.scope - libcontainer container 8285a771cf75a9669d29d66a0169748e9f83056ce9c0069141e539459e836e2e. Sep 10 00:38:55.935283 containerd[1471]: time="2025-09-10T00:38:55.935226597Z" level=info msg="StartContainer for \"8285a771cf75a9669d29d66a0169748e9f83056ce9c0069141e539459e836e2e\" returns successfully" Sep 10 00:38:56.476790 kubelet[2518]: I0910 00:38:56.476708 2518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2p7zs" podStartSLOduration=30.229368503 podStartE2EDuration="52.476687531s" podCreationTimestamp="2025-09-10 00:38:04 +0000 UTC" firstStartedPulling="2025-09-10 00:38:33.586241297 +0000 UTC m=+48.898455986" lastFinishedPulling="2025-09-10 00:38:55.833560325 +0000 UTC m=+71.145775014" observedRunningTime="2025-09-10 00:38:56.475263893 +0000 UTC m=+71.787478582" watchObservedRunningTime="2025-09-10 00:38:56.476687531 +0000 UTC m=+71.788902220" Sep 10 00:38:56.780652 kubelet[2518]: E0910 00:38:56.780538 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:38:57.049483 kubelet[2518]: I0910 00:38:57.049335 2518 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 00:38:57.054713 kubelet[2518]: I0910 00:38:57.054670 2518 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 00:38:58.632072 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:56746.service - OpenSSH per-connection server daemon (10.0.0.1:56746). Sep 10 00:38:58.684636 sshd[6241]: Accepted publickey for core from 10.0.0.1 port 56746 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:38:58.686472 sshd[6241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:38:58.690855 systemd-logind[1448]: New session 16 of user core. Sep 10 00:38:58.698327 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 00:38:58.997813 sshd[6241]: pam_unix(sshd:session): session closed for user core Sep 10 00:38:59.002882 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:56746.service: Deactivated successfully. Sep 10 00:38:59.005501 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:38:59.006784 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:38:59.007748 systemd-logind[1448]: Removed session 16. Sep 10 00:39:04.011110 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:36646.service - OpenSSH per-connection server daemon (10.0.0.1:36646). 
Sep 10 00:39:04.066463 sshd[6276]: Accepted publickey for core from 10.0.0.1 port 36646 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:04.068796 sshd[6276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:04.073864 systemd-logind[1448]: New session 17 of user core. Sep 10 00:39:04.079406 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 00:39:04.381561 sshd[6276]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:04.388610 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:36646.service: Deactivated successfully. Sep 10 00:39:04.391991 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:39:04.392842 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:39:04.393858 systemd-logind[1448]: Removed session 17. Sep 10 00:39:06.405010 kubelet[2518]: I0910 00:39:06.404501 2518 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:39:09.406190 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:36656.service - OpenSSH per-connection server daemon (10.0.0.1:36656). Sep 10 00:39:09.473414 sshd[6293]: Accepted publickey for core from 10.0.0.1 port 36656 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:09.475750 sshd[6293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:09.481365 systemd-logind[1448]: New session 18 of user core. Sep 10 00:39:09.485271 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 00:39:09.881476 sshd[6293]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:09.885617 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:36656.service: Deactivated successfully. Sep 10 00:39:09.888028 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:39:09.888735 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:39:09.889623 systemd-logind[1448]: Removed session 18. Sep 10 00:39:14.894257 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:55220.service - OpenSSH per-connection server daemon (10.0.0.1:55220). Sep 10 00:39:14.962924 sshd[6315]: Accepted publickey for core from 10.0.0.1 port 55220 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:14.964790 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:14.969370 systemd-logind[1448]: New session 19 of user core. Sep 10 00:39:14.976259 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 00:39:15.170481 sshd[6315]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:15.188873 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:55220.service: Deactivated successfully. Sep 10 00:39:15.191891 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:39:15.194141 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:39:15.201669 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:55234.service - OpenSSH per-connection server daemon (10.0.0.1:55234). Sep 10 00:39:15.202802 systemd-logind[1448]: Removed session 19. Sep 10 00:39:15.255336 sshd[6329]: Accepted publickey for core from 10.0.0.1 port 55234 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:15.257612 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:15.263431 systemd-logind[1448]: New session 20 of user core. 
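Stepping back to the pod_startup_latency_tracker entries logged at 00:38:48 and 00:38:56: each reports two figures. podStartE2EDuration is observed-running time minus pod creation, while podStartSLOduration additionally excludes the image-pull window, since pull time depends on the registry and network rather than the node. Recomputing the calico-apiserver numbers from the logged timestamps reproduces both values exactly (a sanity check, not kubelet code):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the calico-apiserver tracker entry above.
	created := mustParse("2025-09-10 00:38:01 +0000 UTC")
	firstPull := mustParse("2025-09-10 00:38:32.134611312 +0000 UTC")
	lastPull := mustParse("2025-09-10 00:38:47.985826083 +0000 UTC")
	running := mustParse("2025-09-10 00:38:48.288041791 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // exclude the image-pull window

	fmt.Println("E2E:", e2e) // 47.288041791s, matching the log
	fmt.Println("SLO:", slo) // 31.43682702s, matching podStartSLOduration
}
```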
Sep 10 00:39:15.277477 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:39:16.103890 sshd[6329]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:16.113878 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:55234.service: Deactivated successfully. Sep 10 00:39:16.116113 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:39:16.118084 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:39:16.125439 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:55242.service - OpenSSH per-connection server daemon (10.0.0.1:55242). Sep 10 00:39:16.126901 systemd-logind[1448]: Removed session 20. Sep 10 00:39:16.167754 sshd[6341]: Accepted publickey for core from 10.0.0.1 port 55242 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:16.169510 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:16.173983 systemd-logind[1448]: New session 21 of user core. Sep 10 00:39:16.183275 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 00:39:17.729608 sshd[6341]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:17.738402 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:55242.service: Deactivated successfully. Sep 10 00:39:17.741345 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:39:17.746052 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:39:17.756430 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:55248.service - OpenSSH per-connection server daemon (10.0.0.1:55248). Sep 10 00:39:17.759220 systemd-logind[1448]: Removed session 21. Sep 10 00:39:17.780572 kubelet[2518]: E0910 00:39:17.780068 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:39:17.807249 sshd[6381]: Accepted publickey for core from 10.0.0.1 port 55248 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:17.809663 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:17.814676 systemd-logind[1448]: New session 22 of user core. Sep 10 00:39:17.823468 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 00:39:18.502687 sshd[6381]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:18.514515 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:55248.service: Deactivated successfully. Sep 10 00:39:18.516710 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:39:18.518509 systemd-logind[1448]: Session 22 logged out. Waiting for processes to exit. Sep 10 00:39:18.525547 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:55260.service - OpenSSH per-connection server daemon (10.0.0.1:55260). Sep 10 00:39:18.526523 systemd-logind[1448]: Removed session 22. Sep 10 00:39:18.566900 sshd[6394]: Accepted publickey for core from 10.0.0.1 port 55260 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:18.569012 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:18.574372 systemd-logind[1448]: New session 23 of user core. Sep 10 00:39:18.585513 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 00:39:18.721599 sshd[6394]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:18.726907 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:55260.service: Deactivated successfully. 
Sep 10 00:39:18.728996 systemd[1]: session-23.scope: Deactivated successfully.
Sep 10 00:39:18.729669 systemd-logind[1448]: Session 23 logged out. Waiting for processes to exit.
Sep 10 00:39:18.730553 systemd-logind[1448]: Removed session 23.
Sep 10 00:39:23.735030 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:44402.service - OpenSSH per-connection server daemon (10.0.0.1:44402).
Sep 10 00:39:23.784469 sshd[6475]: Accepted publickey for core from 10.0.0.1 port 44402 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:39:23.786612 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:39:23.791666 systemd-logind[1448]: New session 24 of user core.
Sep 10 00:39:23.802321 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 10 00:39:24.071154 sshd[6475]: pam_unix(sshd:session): session closed for user core
Sep 10 00:39:24.075598 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:44402.service: Deactivated successfully.
Sep 10 00:39:24.078625 systemd[1]: session-24.scope: Deactivated successfully.
Sep 10 00:39:24.079481 systemd-logind[1448]: Session 24 logged out. Waiting for processes to exit.
Sep 10 00:39:24.080587 systemd-logind[1448]: Removed session 24.
Sep 10 00:39:29.083452 systemd[1]: Started sshd@24-10.0.0.67:22-10.0.0.1:44404.service - OpenSSH per-connection server daemon (10.0.0.1:44404).
Sep 10 00:39:29.151415 sshd[6492]: Accepted publickey for core from 10.0.0.1 port 44404 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:39:29.153534 sshd[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:39:29.158200 systemd-logind[1448]: New session 25 of user core.
Sep 10 00:39:29.163270 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 10 00:39:29.661856 sshd[6492]: pam_unix(sshd:session): session closed for user core
Sep 10 00:39:29.666550 systemd[1]: sshd@24-10.0.0.67:22-10.0.0.1:44404.service: Deactivated successfully.
Sep 10 00:39:29.669673 systemd[1]: session-25.scope: Deactivated successfully.
Sep 10 00:39:29.672616 systemd-logind[1448]: Session 25 logged out. Waiting for processes to exit.
Sep 10 00:39:29.674808 systemd-logind[1448]: Removed session 25.
Sep 10 00:39:34.674548 systemd[1]: Started sshd@25-10.0.0.67:22-10.0.0.1:53462.service - OpenSSH per-connection server daemon (10.0.0.1:53462).
Sep 10 00:39:34.715037 sshd[6527]: Accepted publickey for core from 10.0.0.1 port 53462 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:39:34.717050 sshd[6527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:39:34.721305 systemd-logind[1448]: New session 26 of user core.
Sep 10 00:39:34.732436 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 10 00:39:34.901906 sshd[6527]: pam_unix(sshd:session): session closed for user core
Sep 10 00:39:34.906508 systemd[1]: sshd@25-10.0.0.67:22-10.0.0.1:53462.service: Deactivated successfully.
Sep 10 00:39:34.908941 systemd[1]: session-26.scope: Deactivated successfully.
Sep 10 00:39:34.910136 systemd-logind[1448]: Session 26 logged out. Waiting for processes to exit.
Sep 10 00:39:34.911717 systemd-logind[1448]: Removed session 26.
Sep 10 00:39:39.921517 systemd[1]: Started sshd@26-10.0.0.67:22-10.0.0.1:54268.service - OpenSSH per-connection server daemon (10.0.0.1:54268).
Sep 10 00:39:39.969802 sshd[6541]: Accepted publickey for core from 10.0.0.1 port 54268 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY
Sep 10 00:39:39.972192 sshd[6541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 00:39:39.977461 systemd-logind[1448]: New session 27 of user core.
Sep 10 00:39:39.987457 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 10 00:39:40.170469 sshd[6541]: pam_unix(sshd:session): session closed for user core
Sep 10 00:39:40.175435 systemd[1]: sshd@26-10.0.0.67:22-10.0.0.1:54268.service: Deactivated successfully.
Sep 10 00:39:40.178016 systemd[1]: session-27.scope: Deactivated successfully.
Sep 10 00:39:40.178945 systemd-logind[1448]: Session 27 logged out. Waiting for processes to exit.
Sep 10 00:39:40.179932 systemd-logind[1448]: Removed session 27.
Sep 10 00:39:41.780522 kubelet[2518]: E0910 00:39:41.780453 2518 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"