Sep 5 00:04:50.908328 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Sep 4 22:33:49 -00 2025 Sep 5 00:04:50.908371 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:04:50.908387 kernel: BIOS-provided physical RAM map: Sep 5 00:04:50.908395 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Sep 5 00:04:50.908410 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Sep 5 00:04:50.908418 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Sep 5 00:04:50.908428 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Sep 5 00:04:50.908437 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Sep 5 00:04:50.908446 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Sep 5 00:04:50.908454 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Sep 5 00:04:50.908467 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Sep 5 00:04:50.908475 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Sep 5 00:04:50.908490 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Sep 5 00:04:50.908499 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Sep 5 00:04:50.908513 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Sep 5 00:04:50.908522 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Sep 5 00:04:50.908535 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Sep 5 00:04:50.908544 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Sep 5 00:04:50.908553 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Sep 5 00:04:50.908561 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 5 00:04:50.908570 kernel: NX (Execute Disable) protection: active Sep 5 00:04:50.908579 kernel: APIC: Static calls initialized Sep 5 00:04:50.908588 kernel: efi: EFI v2.7 by EDK II Sep 5 00:04:50.908598 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198 Sep 5 00:04:50.908606 kernel: SMBIOS 2.8 present. 
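The BIOS-e820 entries above are the firmware's complete physical-memory map; everything else the kernel does with RAM starts from these inclusive ranges. A minimal sketch (assuming dmesg text in exactly the format logged here) that totals the `usable` regions:

```python
import re

E820 = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)")

def usable_bytes(dmesg_lines):
    """Sum the sizes of all e820 ranges whose type is 'usable'."""
    total = 0
    for line in dmesg_lines:
        m = E820.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total

# The first two usable ranges logged above:
print(usable_bytes([
    "kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable",
]))  # 0xa0000 + 0x700000 = 7995392 bytes
```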
Sep 5 00:04:50.908613 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Sep 5 00:04:50.908620 kernel: Hypervisor detected: KVM Sep 5 00:04:50.908630 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 5 00:04:50.908637 kernel: kvm-clock: using sched offset of 6480663642 cycles Sep 5 00:04:50.908644 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 5 00:04:50.908651 kernel: tsc: Detected 2794.748 MHz processor Sep 5 00:04:50.908658 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 5 00:04:50.908665 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 5 00:04:50.908672 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Sep 5 00:04:50.908679 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Sep 5 00:04:50.908686 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 5 00:04:50.908696 kernel: Using GB pages for direct mapping Sep 5 00:04:50.908703 kernel: Secure boot disabled Sep 5 00:04:50.908710 kernel: ACPI: Early table checksum verification disabled Sep 5 00:04:50.908717 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Sep 5 00:04:50.908728 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Sep 5 00:04:50.908735 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.908743 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.908753 kernel: ACPI: FACS 0x000000009CBDD000 000040 Sep 5 00:04:50.908760 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.908770 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.908778 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.908785 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 00:04:50.908792 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Sep 5 00:04:50.908799 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Sep 5 00:04:50.908809 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Sep 5 00:04:50.908817 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Sep 5 00:04:50.908824 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Sep 5 00:04:50.908831 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Sep 5 00:04:50.908838 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Sep 5 00:04:50.908845 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Sep 5 00:04:50.908853 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Sep 5 00:04:50.908860 kernel: No NUMA configuration found Sep 5 00:04:50.908869 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Sep 5 00:04:50.908879 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Sep 5 00:04:50.908886 kernel: Zone ranges: Sep 5 00:04:50.908894 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 5 00:04:50.908901 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Sep 5 00:04:50.908908 kernel: Normal empty Sep 5 00:04:50.908915 kernel: Movable zone start for each node Sep 5 00:04:50.908922 kernel: Early memory node ranges Sep 5 00:04:50.908929 kernel: node 0: [mem 
0x0000000000001000-0x000000000009ffff] Sep 5 00:04:50.908937 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Sep 5 00:04:50.908944 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Sep 5 00:04:50.908953 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Sep 5 00:04:50.908961 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Sep 5 00:04:50.908968 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Sep 5 00:04:50.908990 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Sep 5 00:04:50.908998 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:04:50.909005 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Sep 5 00:04:50.909012 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Sep 5 00:04:50.909020 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 5 00:04:50.909027 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Sep 5 00:04:50.909038 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Sep 5 00:04:50.909048 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Sep 5 00:04:50.909058 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 5 00:04:50.909068 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 5 00:04:50.909077 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 5 00:04:50.909086 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 5 00:04:50.909096 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 5 00:04:50.909106 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 5 00:04:50.909114 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 5 00:04:50.909125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 5 00:04:50.909132 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 5 00:04:50.909139 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Sep 5 00:04:50.909146 kernel: TSC deadline timer available Sep 5 00:04:50.909154 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Sep 5 00:04:50.909164 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 5 00:04:50.909172 kernel: kvm-guest: KVM setup pv remote TLB flush Sep 5 00:04:50.909181 kernel: kvm-guest: setup PV sched yield Sep 5 00:04:50.909190 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Sep 5 00:04:50.909203 kernel: Booting paravirtualized kernel on KVM Sep 5 00:04:50.909212 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 5 00:04:50.909221 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Sep 5 00:04:50.909230 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288 Sep 5 00:04:50.909240 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152 Sep 5 00:04:50.909248 kernel: pcpu-alloc: [0] 0 1 2 3 Sep 5 00:04:50.909257 kernel: kvm-guest: PV spinlocks enabled Sep 5 00:04:50.909266 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 5 00:04:50.909287 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:04:50.909304 kernel: Unknown kernel command line 
parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 5 00:04:50.909314 kernel: random: crng init done Sep 5 00:04:50.909323 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 00:04:50.909333 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 00:04:50.909342 kernel: Fallback order for Node 0: 0 Sep 5 00:04:50.909351 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Sep 5 00:04:50.909360 kernel: Policy zone: DMA32 Sep 5 00:04:50.909369 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 00:04:50.909381 kernel: Memory: 2400596K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42872K init, 2324K bss, 166144K reserved, 0K cma-reserved) Sep 5 00:04:50.909388 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 5 00:04:50.909395 kernel: ftrace: allocating 37969 entries in 149 pages Sep 5 00:04:50.909403 kernel: ftrace: allocated 149 pages with 4 groups Sep 5 00:04:50.909414 kernel: Dynamic Preempt: voluntary Sep 5 00:04:50.909435 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 00:04:50.909450 kernel: rcu: RCU event tracing is enabled. Sep 5 00:04:50.909461 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 5 00:04:50.909472 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 00:04:50.909483 kernel: Rude variant of Tasks RCU enabled. Sep 5 00:04:50.909493 kernel: Tracing variant of Tasks RCU enabled. Sep 5 00:04:50.909504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 00:04:50.909519 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 5 00:04:50.909529 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Sep 5 00:04:50.909543 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 00:04:50.909553 kernel: Console: colour dummy device 80x25 Sep 5 00:04:50.909563 kernel: printk: console [ttyS0] enabled Sep 5 00:04:50.909576 kernel: ACPI: Core revision 20230628 Sep 5 00:04:50.909586 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Sep 5 00:04:50.909597 kernel: APIC: Switch to symmetric I/O mode setup Sep 5 00:04:50.909607 kernel: x2apic enabled Sep 5 00:04:50.909617 kernel: APIC: Switched APIC routing to: physical x2apic Sep 5 00:04:50.909628 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Sep 5 00:04:50.909639 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Sep 5 00:04:50.909650 kernel: kvm-guest: setup PV IPIs Sep 5 00:04:50.909660 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Sep 5 00:04:50.909674 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Sep 5 00:04:50.909684 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Sep 5 00:04:50.909694 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 5 00:04:50.909704 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Sep 5 00:04:50.909715 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Sep 5 00:04:50.909725 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 5 00:04:50.909735 kernel: Spectre V2 : Mitigation: Retpolines Sep 5 00:04:50.909746 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 5 00:04:50.909757 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Sep 5 00:04:50.909769 kernel: active return thunk: retbleed_return_thunk Sep 5 00:04:50.909777 kernel: RETBleed: Mitigation: untrained return thunk Sep 5 00:04:50.909785 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 5 00:04:50.909792 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 5 00:04:50.909803 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Sep 5 00:04:50.909812 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Sep 5 00:04:50.909820 kernel: active return thunk: srso_return_thunk Sep 5 00:04:50.909828 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Sep 5 00:04:50.909838 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 5 00:04:50.909846 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 5 00:04:50.909853 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 5 00:04:50.909861 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 5 00:04:50.909869 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Sep 5 00:04:50.909876 kernel: Freeing SMP alternatives memory: 32K Sep 5 00:04:50.909884 kernel: pid_max: default: 32768 minimum: 301 Sep 5 00:04:50.909891 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 5 00:04:50.909899 kernel: landlock: Up and running. Sep 5 00:04:50.909909 kernel: SELinux: Initializing. Sep 5 00:04:50.909917 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.909924 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.909932 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Sep 5 00:04:50.909940 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:04:50.909948 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:04:50.909955 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 00:04:50.909963 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Sep 5 00:04:50.909971 kernel: ... version: 0 Sep 5 00:04:50.909995 kernel: ... bit width: 48 Sep 5 00:04:50.910003 kernel: ... generic registers: 6 Sep 5 00:04:50.910010 kernel: ... value mask: 0000ffffffffffff Sep 5 00:04:50.910018 kernel: ... max period: 00007fffffffffff Sep 5 00:04:50.910025 kernel: ... fixed-purpose events: 0 Sep 5 00:04:50.910033 kernel: ... 
event mask: 000000000000003f Sep 5 00:04:50.910041 kernel: signal: max sigframe size: 1776 Sep 5 00:04:50.910052 kernel: rcu: Hierarchical SRCU implementation. Sep 5 00:04:50.910063 kernel: rcu: Max phase no-delay instances is 400. Sep 5 00:04:50.910076 kernel: smp: Bringing up secondary CPUs ... Sep 5 00:04:50.910085 kernel: smpboot: x86: Booting SMP configuration: Sep 5 00:04:50.910095 kernel: .... node #0, CPUs: #1 #2 #3 Sep 5 00:04:50.910104 kernel: smp: Brought up 1 node, 4 CPUs Sep 5 00:04:50.910114 kernel: smpboot: Max logical packages: 1 Sep 5 00:04:50.910123 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Sep 5 00:04:50.910132 kernel: devtmpfs: initialized Sep 5 00:04:50.910142 kernel: x86/mm: Memory block size: 128MB Sep 5 00:04:50.910151 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Sep 5 00:04:50.910164 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Sep 5 00:04:50.910174 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Sep 5 00:04:50.910183 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Sep 5 00:04:50.910193 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Sep 5 00:04:50.910202 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 00:04:50.910212 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.910221 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 00:04:50.910230 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 00:04:50.910238 kernel: audit: initializing netlink subsys (disabled) Sep 5 00:04:50.910248 kernel: audit: type=2000 audit(1757030690.021:1): state=initialized audit_enabled=0 res=1 Sep 5 00:04:50.910256 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 00:04:50.910263 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 5 00:04:50.910271 kernel: cpuidle: using governor menu Sep 5 00:04:50.910288 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 00:04:50.910296 kernel: dca service started, version 1.12.1 Sep 5 00:04:50.910303 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 5 00:04:50.910311 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 5 00:04:50.910319 kernel: PCI: Using configuration type 1 for base access Sep 5 00:04:50.910329 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
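The two BogoMIPS figures above are consistent with each other: the per-CPU value is loops-per-jiffy scaled and truncated, and the SMP total sums each CPU's own calibration. A sketch of the arithmetic, assuming HZ=1000 on this kernel build:

```python
# Reproduce "5589.49 BogoMIPS (lpj=2794748)" -- the kernel prints
# lpj/(500000/HZ) with a truncated two-digit fraction.
lpj, HZ = 2794748, 1000
whole = lpj // (500000 // HZ)
frac = (lpj // (5000 // HZ)) % 100
print(f"{whole}.{frac:02d} BogoMIPS")  # -> 5589.49
# "22357.98 BogoMIPS" for 4 CPUs is the sum of the four per-CPU calibrations,
# so it is close to, but not exactly, 4 x 5589.49.
```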
Sep 5 00:04:50.910337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 00:04:50.910345 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 00:04:50.910352 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 00:04:50.910360 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 00:04:50.910367 kernel: ACPI: Added _OSI(Module Device) Sep 5 00:04:50.910375 kernel: ACPI: Added _OSI(Processor Device) Sep 5 00:04:50.910382 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 00:04:50.910390 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 00:04:50.910400 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 5 00:04:50.910408 kernel: ACPI: Interpreter enabled Sep 5 00:04:50.910415 kernel: ACPI: PM: (supports S0 S3 S5) Sep 5 00:04:50.910423 kernel: ACPI: Using IOAPIC for interrupt routing Sep 5 00:04:50.910430 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 5 00:04:50.910438 kernel: PCI: Using E820 reservations for host bridge windows Sep 5 00:04:50.910445 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 5 00:04:50.910453 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 00:04:50.910687 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 5 00:04:50.910865 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Sep 5 00:04:50.911129 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Sep 5 00:04:50.911148 kernel: PCI host bridge to bus 0000:00 Sep 5 00:04:50.911354 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 5 00:04:50.911481 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 5 00:04:50.911602 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 5 00:04:50.911753 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Sep 5 00:04:50.911903 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 5 00:04:50.912074 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Sep 5 00:04:50.912247 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 00:04:50.912478 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 5 00:04:50.912681 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Sep 5 00:04:50.912858 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Sep 5 00:04:50.913083 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Sep 5 00:04:50.913293 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Sep 5 00:04:50.913485 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Sep 5 00:04:50.913656 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 5 00:04:50.913860 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Sep 5 00:04:50.914061 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Sep 5 00:04:50.914248 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Sep 5 00:04:50.914437 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Sep 5 00:04:50.914684 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Sep 5 00:04:50.914853 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Sep 5 00:04:50.915045 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Sep 5 00:04:50.915245 kernel: pci 
0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Sep 5 00:04:50.915445 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Sep 5 00:04:50.915622 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Sep 5 00:04:50.915788 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Sep 5 00:04:50.915958 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Sep 5 00:04:50.916161 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Sep 5 00:04:50.916382 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 5 00:04:50.916544 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 5 00:04:50.916724 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 5 00:04:50.916897 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Sep 5 00:04:50.917099 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Sep 5 00:04:50.917335 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 5 00:04:50.917500 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Sep 5 00:04:50.917516 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 5 00:04:50.917527 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 5 00:04:50.917538 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 5 00:04:50.917555 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 5 00:04:50.917565 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 5 00:04:50.917576 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 5 00:04:50.917586 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 5 00:04:50.917597 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 5 00:04:50.917608 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 5 00:04:50.917618 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 5 00:04:50.917629 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 5 00:04:50.917639 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 5 00:04:50.917653 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 5 00:04:50.917664 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 5 00:04:50.917674 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 5 00:04:50.917684 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 5 00:04:50.917695 kernel: iommu: Default domain type: Translated Sep 5 00:04:50.917705 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 5 00:04:50.917716 kernel: efivars: Registered efivars operations Sep 5 00:04:50.917726 kernel: PCI: Using ACPI for IRQ routing Sep 5 00:04:50.917737 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 5 00:04:50.917748 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Sep 5 00:04:50.917762 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Sep 5 00:04:50.917772 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Sep 5 00:04:50.917783 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Sep 5 00:04:50.917943 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 5 00:04:50.918182 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 5 00:04:50.918358 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 5 00:04:50.918372 kernel: vgaarb: loaded Sep 5 00:04:50.918382 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Sep 5 00:04:50.918397 kernel: hpet0: 3 comparators, 
64-bit 100.000000 MHz counter Sep 5 00:04:50.918405 kernel: clocksource: Switched to clocksource kvm-clock Sep 5 00:04:50.918413 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 00:04:50.918421 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 00:04:50.918429 kernel: pnp: PnP ACPI init Sep 5 00:04:50.918658 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 5 00:04:50.918676 kernel: pnp: PnP ACPI: found 6 devices Sep 5 00:04:50.918687 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 5 00:04:50.918704 kernel: NET: Registered PF_INET protocol family Sep 5 00:04:50.918715 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 00:04:50.918726 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 00:04:50.918736 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 00:04:50.918747 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 00:04:50.918757 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 00:04:50.918768 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 00:04:50.918778 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.918788 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 00:04:50.918803 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 00:04:50.918813 kernel: NET: Registered PF_XDP protocol family Sep 5 00:04:50.918963 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Sep 5 00:04:50.919125 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Sep 5 00:04:50.919250 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 5 00:04:50.919378 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 5 00:04:50.919493 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 5 00:04:50.919609 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Sep 5 00:04:50.919733 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 5 00:04:50.919858 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Sep 5 00:04:50.919873 kernel: PCI: CLS 0 bytes, default 64 Sep 5 00:04:50.919884 kernel: Initialise system trusted keyrings Sep 5 00:04:50.919895 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 00:04:50.919907 kernel: Key type asymmetric registered Sep 5 00:04:50.919917 kernel: Asymmetric key parser 'x509' registered Sep 5 00:04:50.919928 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 5 00:04:50.919939 kernel: io scheduler mq-deadline registered Sep 5 00:04:50.919957 kernel: io scheduler kyber registered Sep 5 00:04:50.919968 kernel: io scheduler bfq registered Sep 5 00:04:50.920015 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 5 00:04:50.920026 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 5 00:04:50.920037 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 5 00:04:50.920047 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 5 00:04:50.920058 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 00:04:50.920069 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 5 00:04:50.920080 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 
1,12 Sep 5 00:04:50.920098 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 5 00:04:50.920111 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 5 00:04:50.920348 kernel: rtc_cmos 00:04: RTC can wake from S4 Sep 5 00:04:50.920362 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 5 00:04:50.920481 kernel: rtc_cmos 00:04: registered as rtc0 Sep 5 00:04:50.920599 kernel: rtc_cmos 00:04: setting system clock to 2025-09-05T00:04:50 UTC (1757030690) Sep 5 00:04:50.920715 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Sep 5 00:04:50.920725 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Sep 5 00:04:50.920738 kernel: efifb: probing for efifb Sep 5 00:04:50.920746 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Sep 5 00:04:50.920754 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Sep 5 00:04:50.920761 kernel: efifb: scrolling: redraw Sep 5 00:04:50.920769 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Sep 5 00:04:50.920777 kernel: Console: switching to colour frame buffer device 100x37 Sep 5 00:04:50.920803 kernel: fb0: EFI VGA frame buffer device Sep 5 00:04:50.920814 kernel: pstore: Using crash dump compression: deflate Sep 5 00:04:50.920822 kernel: pstore: Registered efi_pstore as persistent store backend Sep 5 00:04:50.920832 kernel: NET: Registered PF_INET6 protocol family Sep 5 00:04:50.920840 kernel: Segment Routing with IPv6 Sep 5 00:04:50.920848 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 00:04:50.920856 kernel: NET: Registered PF_PACKET protocol family Sep 5 00:04:50.920864 kernel: Key type dns_resolver registered Sep 5 00:04:50.920871 kernel: IPI shorthand broadcast: enabled Sep 5 00:04:50.920879 kernel: sched_clock: Marking stable (876002286, 109984334)->(1001864211, -15877591) Sep 5 00:04:50.920888 kernel: registered taskstats version 1 Sep 5 00:04:50.920896 kernel: Loading compiled-in X.509 certificates Sep 5 00:04:50.920907 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: fbb6a9f06c02a4dbdf06d4c5d95c782040e8492c' Sep 5 00:04:50.920915 kernel: Key type .fscrypt registered Sep 5 00:04:50.920923 kernel: Key type fscrypt-provisioning registered Sep 5 00:04:50.920931 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 00:04:50.920939 kernel: ima: Allocated hash algorithm: sha1 Sep 5 00:04:50.920947 kernel: ima: No architecture policies found Sep 5 00:04:50.920954 kernel: clk: Disabling unused clocks Sep 5 00:04:50.920963 kernel: Freeing unused kernel image (initmem) memory: 42872K Sep 5 00:04:50.920973 kernel: Write protecting the kernel read-only data: 36864k Sep 5 00:04:50.921041 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 5 00:04:50.921052 kernel: Run /init as init process Sep 5 00:04:50.921063 kernel: with arguments: Sep 5 00:04:50.921074 kernel: /init Sep 5 00:04:50.921082 kernel: with environment: Sep 5 00:04:50.921089 kernel: HOME=/ Sep 5 00:04:50.921097 kernel: TERM=linux Sep 5 00:04:50.921105 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 00:04:50.921121 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 5 00:04:50.921134 systemd[1]: Detected virtualization kvm. 
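The `with environment: ... BOOT_IMAGE=/flatcar/vmlinuz-a` lines follow from the earlier warning about unknown command-line parameters: anything the kernel itself does not consume is handed to PID 1, with `KEY=VALUE` words becoming environment variables and bare words becoming arguments. A simplified sketch of that split (real kernels first filter out parameters they recognize, such as `root=`):

```python
# Split /proc/cmdline the way unknown parameters reach /init.
with open("/proc/cmdline") as f:
    params = f.read().split()
print("environment-style:", [p for p in params if "=" in p])   # e.g. BOOT_IMAGE=...
print("argument-style:  ", [p for p in params if "=" not in p])
```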
Sep 5 00:04:50.921146 systemd[1]: Detected architecture x86-64. Sep 5 00:04:50.921157 systemd[1]: Running in initrd. Sep 5 00:04:50.921176 systemd[1]: No hostname configured, using default hostname. Sep 5 00:04:50.921188 systemd[1]: Hostname set to . Sep 5 00:04:50.921202 systemd[1]: Initializing machine ID from VM UUID. Sep 5 00:04:50.921217 systemd[1]: Queued start job for default target initrd.target. Sep 5 00:04:50.921232 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 00:04:50.921247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 00:04:50.921263 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 00:04:50.921289 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 00:04:50.921311 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 00:04:50.921326 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 00:04:50.921345 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 00:04:50.921360 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 00:04:50.921374 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 00:04:50.921389 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 00:04:50.921403 systemd[1]: Reached target paths.target - Path Units. Sep 5 00:04:50.921420 systemd[1]: Reached target slices.target - Slice Units. Sep 5 00:04:50.921431 systemd[1]: Reached target swap.target - Swaps. Sep 5 00:04:50.921443 systemd[1]: Reached target timers.target - Timer Units. Sep 5 00:04:50.921454 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 00:04:50.921473 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 00:04:50.921484 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 00:04:50.921496 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 5 00:04:50.921507 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 00:04:50.921518 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 00:04:50.921534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 00:04:50.921545 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 00:04:50.921556 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 00:04:50.921567 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 00:04:50.921578 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 00:04:50.921589 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 00:04:50.921601 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 00:04:50.921612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 00:04:50.921631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:04:50.921644 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 00:04:50.921655 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Sep 5 00:04:50.921666 systemd[1]: Finished systemd-fsck-usr.service. Sep 5 00:04:50.921706 systemd-journald[193]: Collecting audit messages is disabled. Sep 5 00:04:50.921738 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 00:04:50.921750 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:04:50.921761 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:04:50.921773 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 00:04:50.921790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 00:04:50.921801 systemd-journald[193]: Journal started Sep 5 00:04:50.921819 systemd-journald[193]: Runtime Journal (/run/log/journal/eab9cb05eb834a0dafa8b04971e4380f) is 6.0M, max 48.3M, 42.2M free. Sep 5 00:04:50.902969 systemd-modules-load[194]: Inserted module 'overlay' Sep 5 00:04:50.924995 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 00:04:50.927471 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 00:04:50.930449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 00:04:50.937009 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 00:04:50.939016 kernel: Bridge firewalling registered Sep 5 00:04:50.938745 systemd-modules-load[194]: Inserted module 'br_netfilter' Sep 5 00:04:50.938967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:04:50.942739 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 00:04:50.945081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 00:04:50.961130 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 00:04:50.963730 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 00:04:50.974160 dracut-cmdline[222]: dracut-dracut-053 Sep 5 00:04:50.977749 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 00:04:50.980477 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=539572d827c6f3583460e612b4909ac43a0adb56b076565948077ad2e9caeea5 Sep 5 00:04:50.986144 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 00:04:51.022941 systemd-resolved[238]: Positive Trust Anchors: Sep 5 00:04:51.022963 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 00:04:51.023016 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 00:04:51.026190 systemd-resolved[238]: Defaulting to hostname 'linux'. Sep 5 00:04:51.027535 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 00:04:51.032893 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 5 00:04:51.077017 kernel: SCSI subsystem initialized Sep 5 00:04:51.087001 kernel: Loading iSCSI transport class v2.0-870. Sep 5 00:04:51.098008 kernel: iscsi: registered transport (tcp) Sep 5 00:04:51.119030 kernel: iscsi: registered transport (qla4xxx) Sep 5 00:04:51.119056 kernel: QLogic iSCSI HBA Driver Sep 5 00:04:51.176325 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 5 00:04:51.183211 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 5 00:04:51.210021 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 5 00:04:51.210053 kernel: device-mapper: uevent: version 1.0.3 Sep 5 00:04:51.210064 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 5 00:04:51.255001 kernel: raid6: avx2x4 gen() 27669 MB/s Sep 5 00:04:51.272004 kernel: raid6: avx2x2 gen() 26800 MB/s Sep 5 00:04:51.289110 kernel: raid6: avx2x1 gen() 24688 MB/s Sep 5 00:04:51.289125 kernel: raid6: using algorithm avx2x4 gen() 27669 MB/s Sep 5 00:04:51.307036 kernel: raid6: .... xor() 7691 MB/s, rmw enabled Sep 5 00:04:51.307053 kernel: raid6: using avx2x2 recovery algorithm Sep 5 00:04:51.329022 kernel: xor: automatically using best checksumming function avx Sep 5 00:04:51.506027 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 5 00:04:51.521575 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 5 00:04:51.533259 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 00:04:51.551647 systemd-udevd[413]: Using default interface naming scheme 'v255'. Sep 5 00:04:51.558359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 00:04:51.572190 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 5 00:04:51.589915 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Sep 5 00:04:51.628532 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 5 00:04:51.644155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 00:04:51.716963 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 00:04:51.724143 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 5 00:04:51.741331 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 5 00:04:51.744146 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
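The raid6 lines above show the kernel benchmarking each SIMD gen() implementation and keeping the fastest; `avx2x4` wins here at 27669 MB/s. A toy re-run of that selection using the numbers just logged:

```python
# Throughputs in MB/s, copied from the raid6 benchmark lines above.
bench = {"avx2x4": 27669, "avx2x2": 26800, "avx2x1": 24688}
best = max(bench, key=bench.get)
print(f"raid6: using algorithm {best} gen() {bench[best]} MB/s")
```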
Sep 5 00:04:51.746800 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 00:04:51.749213 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 00:04:51.763034 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Sep 5 00:04:51.763311 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 5 00:04:51.770464 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 5 00:04:51.772649 kernel: cryptd: max_cpu_qlen set to 1000 Sep 5 00:04:51.779547 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 5 00:04:51.779582 kernel: GPT:9289727 != 19775487 Sep 5 00:04:51.779598 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 5 00:04:51.779621 kernel: GPT:9289727 != 19775487 Sep 5 00:04:51.780477 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 5 00:04:51.780504 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:04:51.783854 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 5 00:04:51.793612 kernel: AVX2 version of gcm_enc/dec engaged. Sep 5 00:04:51.793658 kernel: AES CTR mode by8 optimization enabled Sep 5 00:04:51.793674 kernel: libata version 3.00 loaded. Sep 5 00:04:51.804002 kernel: ahci 0000:00:1f.2: version 3.0 Sep 5 00:04:51.810053 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 5 00:04:51.812590 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 5 00:04:51.812851 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 5 00:04:51.817014 kernel: scsi host0: ahci Sep 5 00:04:51.817278 kernel: scsi host1: ahci Sep 5 00:04:51.819251 kernel: scsi host2: ahci Sep 5 00:04:51.819534 kernel: scsi host3: ahci Sep 5 00:04:51.819357 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 00:04:51.822667 kernel: scsi host4: ahci Sep 5 00:04:51.822900 kernel: scsi host5: ahci Sep 5 00:04:51.819482 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:04:51.837692 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Sep 5 00:04:51.837716 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Sep 5 00:04:51.837727 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Sep 5 00:04:51.837737 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Sep 5 00:04:51.837748 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Sep 5 00:04:51.837759 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Sep 5 00:04:51.837769 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (462) Sep 5 00:04:51.837780 kernel: BTRFS: device fsid 3713859d-e283-4add-80dc-7ca8465b1d1d devid 1 transid 33 /dev/vda3 scanned by (udev-worker) (457) Sep 5 00:04:51.821583 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:04:51.833481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:04:51.833668 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:04:51.834026 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:04:51.842241 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:04:51.857868 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
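The `GPT:9289727 != 19775487` warnings above mean the primary GPT header still records its backup at LBA 9289727 while the since-enlarged virtio disk actually ends at LBA 19775487; `disk-uuid.service` rewrites the headers a little later in this log. A minimal sketch (to run against the VM's disk image, not this log) that reads the alternate-header field, which the standard GPT header layout puts at byte offset 32, and compares it with the device's real last LBA:

```python
import struct

SECTOR = 512

def gpt_alt_vs_last(path):
    """Return (alternate-header LBA from the primary GPT header, actual last LBA)."""
    with open(path, "rb") as f:
        f.seek(1 * SECTOR)                 # primary GPT header lives at LBA 1
        hdr = f.read(92)
        assert hdr[:8] == b"EFI PART", "no GPT signature"
        (alt_lba,) = struct.unpack_from("<Q", hdr, 32)
        f.seek(0, 2)
        last_lba = f.tell() // SECTOR - 1
    return alt_lba, last_lba

# gpt_alt_vs_last("disk.img")  # would yield (9289727, 19775487) for this VM
```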
Sep 5 00:04:51.864510 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 5 00:04:51.871939 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 5 00:04:51.878379 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 5 00:04:51.884852 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 5 00:04:51.886133 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 5 00:04:51.900198 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 5 00:04:51.901395 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 00:04:51.901461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:04:51.903697 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:04:51.906864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 00:04:51.913019 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:04:51.913312 disk-uuid[554]: Primary Header is updated. Sep 5 00:04:51.913312 disk-uuid[554]: Secondary Entries is updated. Sep 5 00:04:51.913312 disk-uuid[554]: Secondary Header is updated. Sep 5 00:04:51.930852 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 00:04:51.940148 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 00:04:51.972844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 00:04:52.136022 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 5 00:04:52.136109 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Sep 5 00:04:52.136122 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 5 00:04:52.136135 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 5 00:04:52.137001 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 5 00:04:52.138008 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 5 00:04:52.139012 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Sep 5 00:04:52.139025 kernel: ata3.00: applying bridge limits Sep 5 00:04:52.140023 kernel: ata3.00: configured for UDMA/100 Sep 5 00:04:52.141009 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Sep 5 00:04:52.189547 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Sep 5 00:04:52.189796 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Sep 5 00:04:52.202008 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Sep 5 00:04:52.924022 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 5 00:04:52.924827 disk-uuid[558]: The operation has completed successfully. Sep 5 00:04:52.952645 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 5 00:04:52.952778 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 5 00:04:52.981242 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 5 00:04:52.984574 sh[599]: Success Sep 5 00:04:52.999028 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Sep 5 00:04:53.045872 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 5 00:04:53.060567 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
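`verity-setup.service` above assembles /dev/mapper/usr from the USR-A partition named on the kernel command line, verifying it against the `verity.usrhash` root hash logged at the very top. A tiny helper that extracts that hash from the same command line:

```python
# Pull the dm-verity root hash for /usr from the kernel command line.
with open("/proc/cmdline") as f:
    params = dict(p.split("=", 1) for p in f.read().split() if "=" in p)
print(params.get("verity.usrhash"))  # 539572d827c6f3583460e612b4909ac4...
```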
Sep 5 00:04:53.064147 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 5 00:04:53.084023 kernel: BTRFS info (device dm-0): first mount of filesystem 3713859d-e283-4add-80dc-7ca8465b1d1d Sep 5 00:04:53.084075 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:04:53.086328 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 5 00:04:53.086363 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 5 00:04:53.087210 kernel: BTRFS info (device dm-0): using free space tree Sep 5 00:04:53.097521 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 5 00:04:53.098501 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 5 00:04:53.107302 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 5 00:04:53.110811 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 5 00:04:53.125443 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:04:53.125515 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 5 00:04:53.125531 kernel: BTRFS info (device vda6): using free space tree Sep 5 00:04:53.130016 kernel: BTRFS info (device vda6): auto enabling async discard Sep 5 00:04:53.144510 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 5 00:04:53.147543 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd Sep 5 00:04:53.161352 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 5 00:04:53.166381 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 5 00:04:53.247004 ignition[693]: Ignition 2.19.0 Sep 5 00:04:53.247022 ignition[693]: Stage: fetch-offline Sep 5 00:04:53.247076 ignition[693]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:04:53.247093 ignition[693]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:04:53.247245 ignition[693]: parsed url from cmdline: "" Sep 5 00:04:53.247251 ignition[693]: no config URL provided Sep 5 00:04:53.247260 ignition[693]: reading system config file "/usr/lib/ignition/user.ign" Sep 5 00:04:53.247277 ignition[693]: no config at "/usr/lib/ignition/user.ign" Sep 5 00:04:53.247318 ignition[693]: op(1): [started] loading QEMU firmware config module Sep 5 00:04:53.247326 ignition[693]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 5 00:04:53.259005 ignition[693]: op(1): [finished] loading QEMU firmware config module Sep 5 00:04:53.298418 ignition[693]: parsing config with SHA512: 36a22e168e9d775d68c0476886f018c4ccab6fe4a5a2d59adb3e9ea230b5967b4a44a9657db310b908b1fbbc0e8e928cd4fc06af871dbd858305f07592bec5d4 Sep 5 00:04:53.299017 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 00:04:53.306013 unknown[693]: fetched base config from "system" Sep 5 00:04:53.306553 ignition[693]: fetch-offline: fetch-offline passed Sep 5 00:04:53.306030 unknown[693]: fetched user config from "qemu" Sep 5 00:04:53.306649 ignition[693]: Ignition finished successfully Sep 5 00:04:53.313399 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 00:04:53.316097 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
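The fetch-offline stage above got its user config from QEMU: `op(1)` loads the `qemu_fw_cfg` module, after which a blob passed to QEMU via `-fw_cfg` is readable from sysfs and Ignition logs its SHA512. A sketch of that read; the exact sysfs path is an assumption based on the driver's `by_name` layout:

```python
import hashlib

# Hypothetical path; qemu_fw_cfg exposes entries under by_name/<name>/raw.
PATH = "/sys/firmware/qemu_fw_cfg/by_name/opt/com.coreos/config/raw"

with open(PATH, "rb") as f:
    blob = f.read()
print("parsing config with SHA512:", hashlib.sha512(blob).hexdigest())
```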
Sep 5 00:04:53.348900 systemd-networkd[787]: lo: Link UP Sep 5 00:04:53.348914 systemd-networkd[787]: lo: Gained carrier Sep 5 00:04:53.352651 systemd-networkd[787]: Enumeration completed Sep 5 00:04:53.353090 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 5 00:04:53.354058 systemd[1]: Reached target network.target - Network. Sep 5 00:04:53.354493 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 5 00:04:53.360048 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:04:53.360059 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 5 00:04:53.364969 systemd-networkd[787]: eth0: Link UP Sep 5 00:04:53.365002 systemd-networkd[787]: eth0: Gained carrier Sep 5 00:04:53.365016 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 5 00:04:53.367288 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 5 00:04:53.469175 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 5 00:04:53.509910 ignition[790]: Ignition 2.19.0 Sep 5 00:04:53.509925 ignition[790]: Stage: kargs Sep 5 00:04:53.511088 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:04:53.511107 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:04:53.512065 ignition[790]: kargs: kargs passed Sep 5 00:04:53.512139 ignition[790]: Ignition finished successfully Sep 5 00:04:53.517952 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 5 00:04:53.529182 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 5 00:04:53.545564 ignition[799]: Ignition 2.19.0 Sep 5 00:04:53.545578 ignition[799]: Stage: disks Sep 5 00:04:53.545802 ignition[799]: no configs at "/usr/lib/ignition/base.d" Sep 5 00:04:53.545840 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 5 00:04:53.547668 ignition[799]: disks: disks passed Sep 5 00:04:53.549848 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 5 00:04:53.547754 ignition[799]: Ignition finished successfully Sep 5 00:04:53.552387 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 5 00:04:53.554855 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 5 00:04:53.557945 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 5 00:04:53.560274 systemd[1]: Reached target sysinit.target - System Initialization. Sep 5 00:04:53.562824 systemd[1]: Reached target basic.target - Basic System. Sep 5 00:04:53.581141 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 5 00:04:53.603378 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 5 00:04:53.611293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 5 00:04:53.624301 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 5 00:04:53.741124 kernel: EXT4-fs (vda9): mounted filesystem 83287606-d110-4d13-a801-c8d88205bd5a r/w with ordered data mode. Quota mode: none. Sep 5 00:04:53.742221 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 5 00:04:53.743314 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. 
Sep 5 00:04:53.755142 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:04:53.757471 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 00:04:53.757996 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 00:04:53.758055 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 00:04:53.769564 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817)
Sep 5 00:04:53.769590 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:53.769603 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:04:53.769614 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:04:53.758098 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:04:53.771397 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 00:04:53.774280 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:04:53.774542 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 00:04:53.778239 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:04:53.877540 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 00:04:53.886677 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Sep 5 00:04:53.893603 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 00:04:53.900169 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 00:04:54.009551 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 00:04:54.022072 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 00:04:54.023694 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 00:04:54.031079 kernel: BTRFS info (device vda6): last unmount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:54.087701 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 00:04:54.093614 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 00:04:54.107400 ignition[930]: INFO : Ignition 2.19.0
Sep 5 00:04:54.107400 ignition[930]: INFO : Stage: mount
Sep 5 00:04:54.109209 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:54.109209 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:54.109209 ignition[930]: INFO : mount: mount passed
Sep 5 00:04:54.109209 ignition[930]: INFO : Ignition finished successfully
Sep 5 00:04:54.111291 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 00:04:54.126098 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 00:04:54.133379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 00:04:54.146375 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Sep 5 00:04:54.146404 kernel: BTRFS info (device vda6): first mount of filesystem 7246102b-8cb9-4a2f-9573-d0819df5c4dd
Sep 5 00:04:54.146416 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 5 00:04:54.147993 kernel: BTRFS info (device vda6): using free space tree
Sep 5 00:04:54.151000 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 5 00:04:54.152086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 00:04:54.179513 ignition[960]: INFO : Ignition 2.19.0
Sep 5 00:04:54.179513 ignition[960]: INFO : Stage: files
Sep 5 00:04:54.181230 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:54.181230 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:54.181230 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 00:04:54.181230 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 00:04:54.181230 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 00:04:54.187880 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 00:04:54.187880 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 00:04:54.187880 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 00:04:54.187880 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 5 00:04:54.187880 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 5 00:04:54.184777 unknown[960]: wrote ssh authorized keys file for user: core
Sep 5 00:04:54.264876 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 00:04:55.000498 systemd-networkd[787]: eth0: Gained IPv6LL
Sep 5 00:04:55.855432 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 5 00:04:55.865357 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:55.873338 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:56.026512 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 5 00:04:56.528321 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 00:04:57.985789 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 5 00:04:57.985789 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 00:04:57.992213 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 00:04:57.994946 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:04:58.163264 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:04:58.172827 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 00:04:58.175203 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 00:04:58.175203 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 00:04:58.180777 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 00:04:58.185764 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:04:58.191092 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 00:04:58.191092 ignition[960]: INFO : files: files passed
Sep 5 00:04:58.196966 ignition[960]: INFO : Ignition finished successfully
Sep 5 00:04:58.203694 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 00:04:58.216796 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 00:04:58.231362 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 00:04:58.240929 systemd[1]: ignition-quench.service: Deactivated successfully.
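[Editor's note: the files-stage operations above (user "core" plus ssh keys, the Helm tarball fetched from get.helm.sh, update.conf, the kubernetes sysext link, and prepare-helm.service preset to enabled) are driven entirely by the Ignition config. One plausible Butane source for part of it, shown only as an illustrative sketch (the real config is not present in this log):

    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true

Butane transpiles this to the JSON Ignition actually parses; each storage entry corresponds to one of the op(N) steps logged above.]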
Sep 5 00:04:58.241444 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 00:04:58.251655 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 00:04:58.259529 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:04:58.259529 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:04:58.265860 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 00:04:58.268954 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:04:58.270570 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 00:04:58.282855 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 00:04:58.337204 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 00:04:58.337874 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 00:04:58.342130 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 00:04:58.344160 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 00:04:58.351427 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 00:04:58.364295 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 00:04:58.393193 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:04:58.417629 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 00:04:58.456071 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:04:58.459143 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:04:58.476462 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 00:04:58.479364 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 00:04:58.481898 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 00:04:58.488387 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 00:04:58.495056 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 00:04:58.497524 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 00:04:58.507567 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 00:04:58.509337 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 00:04:58.521424 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 00:04:58.522995 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 00:04:58.524682 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 00:04:58.533622 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 00:04:58.539248 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 00:04:58.544012 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 00:04:58.544311 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 00:04:58.551488 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:04:58.554234 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:04:58.560398 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 00:04:58.571545 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:04:58.575615 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 00:04:58.575884 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 00:04:58.581264 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 00:04:58.581487 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 00:04:58.583223 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 00:04:58.593508 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 00:04:58.596056 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:04:58.602758 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 00:04:58.604144 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 00:04:58.607652 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 00:04:58.608386 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 00:04:58.609741 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 00:04:58.609885 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 00:04:58.612284 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 00:04:58.612525 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 00:04:58.615911 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 00:04:58.616111 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 00:04:58.629482 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 00:04:58.633509 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 00:04:58.634618 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 00:04:58.634806 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:04:58.637357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 00:04:58.637584 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 00:04:58.651948 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 00:04:58.652185 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 00:04:58.671963 ignition[1014]: INFO : Ignition 2.19.0
Sep 5 00:04:58.671963 ignition[1014]: INFO : Stage: umount
Sep 5 00:04:58.674446 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 00:04:58.674446 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 00:04:58.679221 ignition[1014]: INFO : umount: umount passed
Sep 5 00:04:58.679221 ignition[1014]: INFO : Ignition finished successfully
Sep 5 00:04:58.680317 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 00:04:58.680553 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 00:04:58.683145 systemd[1]: Stopped target network.target - Network.
Sep 5 00:04:58.684910 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 00:04:58.685033 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 00:04:58.689134 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 00:04:58.689223 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 00:04:58.691331 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 00:04:58.691415 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 00:04:58.696283 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 00:04:58.696382 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 00:04:58.700241 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 00:04:58.705163 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 00:04:58.712655 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 00:04:58.717113 systemd-networkd[787]: eth0: DHCPv6 lease lost
Sep 5 00:04:58.720177 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 00:04:58.720408 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 5 00:04:58.724616 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 5 00:04:58.724750 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:04:58.733164 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 5 00:04:58.735801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 5 00:04:58.735946 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 00:04:58.742877 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:04:58.754863 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 00:04:58.755126 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 00:04:58.766708 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 5 00:04:58.766864 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:04:58.770390 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 5 00:04:58.770487 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:04:58.773583 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 5 00:04:58.773670 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:04:58.780192 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 5 00:04:58.780461 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:04:58.784307 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 5 00:04:58.784434 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:04:58.787194 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 5 00:04:58.787306 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:04:58.790534 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 5 00:04:58.790690 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 00:04:58.795018 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 5 00:04:58.795117 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 5 00:04:58.795949 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 5 00:04:58.796051 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 5 00:04:58.832566 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 5 00:04:58.832717 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 5 00:04:58.832811 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:04:58.845249 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 5 00:04:58.845386 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:04:58.847315 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 00:04:58.847443 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:04:58.850299 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:04:58.850407 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:04:58.856026 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 5 00:04:58.856222 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 5 00:04:58.861665 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 5 00:04:58.861814 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 5 00:04:58.912611 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 5 00:04:58.912843 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 5 00:04:58.919818 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 5 00:04:58.921282 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 5 00:04:58.921420 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 5 00:04:58.940222 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 5 00:04:58.956005 systemd[1]: Switching root.
Sep 5 00:04:59.005756 systemd-journald[193]: Journal stopped
Sep 5 00:05:00.321256 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Sep 5 00:05:00.321354 kernel: SELinux: policy capability network_peer_controls=1
Sep 5 00:05:00.321381 kernel: SELinux: policy capability open_perms=1
Sep 5 00:05:00.321398 kernel: SELinux: policy capability extended_socket_class=1
Sep 5 00:05:00.321414 kernel: SELinux: policy capability always_check_network=0
Sep 5 00:05:00.321429 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 5 00:05:00.321445 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 5 00:05:00.321460 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 5 00:05:00.321492 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 5 00:05:00.321509 kernel: audit: type=1403 audit(1757030699.398:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 5 00:05:00.321526 systemd[1]: Successfully loaded SELinux policy in 63.554ms.
Sep 5 00:05:00.321558 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.604ms.
Sep 5 00:05:00.321576 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 5 00:05:00.321593 systemd[1]: Detected virtualization kvm.
Sep 5 00:05:00.321610 systemd[1]: Detected architecture x86-64.
Sep 5 00:05:00.321626 systemd[1]: Detected first boot.
Sep 5 00:05:00.321643 systemd[1]: Initializing machine ID from VM UUID.
Sep 5 00:05:00.321668 zram_generator::config[1059]: No configuration found.
Sep 5 00:05:00.321687 systemd[1]: Populated /etc with preset unit settings.
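[Editor's note: two details worth flagging in the first-boot block above: the SELinux policy was loaded before any real-root unit ran, and "Initializing machine ID from VM UUID" means systemd derived /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A quick way to see the relationship on a KVM guest (illustrative commands, not part of this log):

    cat /sys/class/dmi/id/product_uuid   # the UUID QEMU passes to the guest
    cat /etc/machine-id                  # stable ID derived from it on first boot

This keeps the machine ID stable across reinstalls of the same VM.]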
Sep 5 00:05:00.321704 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 5 00:05:00.321720 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 5 00:05:00.321736 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:05:00.321765 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 5 00:05:00.321783 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 5 00:05:00.321801 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 5 00:05:00.321828 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 5 00:05:00.321846 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 5 00:05:00.321863 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 5 00:05:00.321880 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 5 00:05:00.321896 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 5 00:05:00.321913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 00:05:00.321930 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 00:05:00.321950 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 5 00:05:00.321967 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 5 00:05:00.322038 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 5 00:05:00.322058 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 5 00:05:00.322075 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 5 00:05:00.322092 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 00:05:00.322108 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 5 00:05:00.322124 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 5 00:05:00.322141 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 5 00:05:00.322167 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 5 00:05:00.322184 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 00:05:00.322201 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 00:05:00.322218 systemd[1]: Reached target slices.target - Slice Units.
Sep 5 00:05:00.322234 systemd[1]: Reached target swap.target - Swaps.
Sep 5 00:05:00.322251 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 5 00:05:00.322279 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 5 00:05:00.322296 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 5 00:05:00.322312 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 5 00:05:00.322329 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 5 00:05:00.322355 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 5 00:05:00.322373 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 5 00:05:00.322389 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 5 00:05:00.322406 systemd[1]: Mounting media.mount - External Media Directory...
Sep 5 00:05:00.322423 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:00.322440 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 5 00:05:00.322457 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 5 00:05:00.322474 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 5 00:05:00.322499 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 5 00:05:00.322516 systemd[1]: Reached target machines.target - Containers.
Sep 5 00:05:00.322532 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 5 00:05:00.322549 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:00.322565 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 5 00:05:00.322581 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 5 00:05:00.322596 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:00.322613 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:05:00.322629 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:00.322663 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 5 00:05:00.322680 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:00.322697 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 5 00:05:00.322721 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 5 00:05:00.322737 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 5 00:05:00.322754 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 5 00:05:00.322770 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 5 00:05:00.322786 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 5 00:05:00.322816 kernel: loop: module loaded
Sep 5 00:05:00.322834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 5 00:05:00.322851 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 00:05:00.322868 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 5 00:05:00.322885 kernel: ACPI: bus type drm_connector registered
Sep 5 00:05:00.322900 kernel: fuse: init (API version 7.39)
Sep 5 00:05:00.322917 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 00:05:00.322933 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 5 00:05:00.322950 systemd[1]: Stopped verity-setup.service.
Sep 5 00:05:00.322967 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:00.323013 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 5 00:05:00.323066 systemd-journald[1129]: Collecting audit messages is disabled.
Sep 5 00:05:00.323098 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 5 00:05:00.323137 systemd[1]: Mounted media.mount - External Media Directory.
Sep 5 00:05:00.323156 systemd-journald[1129]: Journal started
Sep 5 00:05:00.323185 systemd-journald[1129]: Runtime Journal (/run/log/journal/eab9cb05eb834a0dafa8b04971e4380f) is 6.0M, max 48.3M, 42.2M free.
Sep 5 00:05:00.077736 systemd[1]: Queued start job for default target multi-user.target.
Sep 5 00:05:00.100280 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 5 00:05:00.100776 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 5 00:05:00.325268 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 5 00:05:00.326821 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 5 00:05:00.328090 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 5 00:05:00.329561 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 5 00:05:00.330790 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 5 00:05:00.332240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 00:05:00.333770 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 5 00:05:00.333964 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 5 00:05:00.335454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:00.335643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:00.337282 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:05:00.337479 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:05:00.338844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:00.339103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:00.340617 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 5 00:05:00.340810 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 5 00:05:00.342204 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:00.342394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:00.343777 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 5 00:05:00.345210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 00:05:00.346760 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 5 00:05:00.361478 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 00:05:00.369054 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 5 00:05:00.371569 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 5 00:05:00.372889 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 5 00:05:00.372919 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 00:05:00.375212 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 5 00:05:00.377797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 00:05:00.381099 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
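[Editor's note: the modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop starts and finishes above are instances of systemd's modprobe@.service template; each instance simply loads the module named by the instance string. The upstream template is approximately:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=-/sbin/modprobe -abq %i

so modprobe@loop.service effectively runs "modprobe -abq loop", and the "-" prefix makes a missing module non-fatal; the "fuse: init" and "loop: module loaded" kernel lines are the corresponding results.]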
Sep 5 00:05:00.382457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:00.386942 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 5 00:05:00.391243 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 5 00:05:00.392442 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:05:00.393684 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 5 00:05:00.395053 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:05:00.401132 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 5 00:05:00.406223 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 5 00:05:00.449702 systemd-journald[1129]: Time spent on flushing to /var/log/journal/eab9cb05eb834a0dafa8b04971e4380f is 33.824ms for 995 entries.
Sep 5 00:05:00.449702 systemd-journald[1129]: System Journal (/var/log/journal/eab9cb05eb834a0dafa8b04971e4380f) is 8.0M, max 195.6M, 187.6M free.
Sep 5 00:05:00.509898 systemd-journald[1129]: Received client request to flush runtime journal.
Sep 5 00:05:00.509945 kernel: loop0: detected capacity change from 0 to 140768
Sep 5 00:05:00.509967 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 5 00:05:00.455191 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 5 00:05:00.460291 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 5 00:05:00.469414 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 5 00:05:00.474168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 00:05:00.476776 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 5 00:05:00.485462 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 5 00:05:00.489697 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 5 00:05:00.508209 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 5 00:05:00.513682 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 5 00:05:00.521838 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 00:05:00.524432 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Sep 5 00:05:00.524456 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Sep 5 00:05:00.536296 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 5 00:05:00.538540 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 5 00:05:00.542226 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 5 00:05:00.544241 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 5 00:05:00.548085 kernel: loop1: detected capacity change from 0 to 142488
Sep 5 00:05:00.552180 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 5 00:05:00.556078 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 5 00:05:00.606500 kernel: loop2: detected capacity change from 0 to 221472
Sep 5 00:05:00.610391 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 5 00:05:00.618810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 5 00:05:00.655556 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Sep 5 00:05:00.655578 systemd-tmpfiles[1196]: ACLs are not supported, ignoring.
Sep 5 00:05:00.663458 kernel: loop3: detected capacity change from 0 to 140768
Sep 5 00:05:00.665103 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 5 00:05:00.677039 kernel: loop4: detected capacity change from 0 to 142488
Sep 5 00:05:00.692031 kernel: loop5: detected capacity change from 0 to 221472
Sep 5 00:05:00.696454 (sd-merge)[1199]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 5 00:05:00.697070 (sd-merge)[1199]: Merged extensions into '/usr'.
Sep 5 00:05:00.724594 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 5 00:05:00.724612 systemd[1]: Reloading...
Sep 5 00:05:00.822036 zram_generator::config[1222]: No configuration found.
Sep 5 00:05:00.990489 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 5 00:05:01.000668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:05:01.051511 systemd[1]: Reloading finished in 326 ms.
Sep 5 00:05:01.090360 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 5 00:05:01.091971 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 5 00:05:01.129701 systemd[1]: Starting ensure-sysext.service...
Sep 5 00:05:01.133102 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 5 00:05:01.140330 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)...
Sep 5 00:05:01.140353 systemd[1]: Reloading...
Sep 5 00:05:01.164386 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 5 00:05:01.164808 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 5 00:05:01.165901 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 5 00:05:01.166250 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Sep 5 00:05:01.166334 systemd-tmpfiles[1264]: ACLs are not supported, ignoring.
Sep 5 00:05:01.170641 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:05:01.170655 systemd-tmpfiles[1264]: Skipping /boot
Sep 5 00:05:01.190941 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot.
Sep 5 00:05:01.191181 systemd-tmpfiles[1264]: Skipping /boot
Sep 5 00:05:01.214010 zram_generator::config[1294]: No configuration found.
Sep 5 00:05:01.332446 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
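[Editor's note: the (sd-merge) lines above are systemd-sysext at work: the extension images found under /etc/extensions (including the kubernetes.raw symlink written during the Ignition files stage) are mounted as an overlay onto /usr, which is also what the loop0-loop5 capacity-change kernel messages correspond to. Each image must carry a release file at usr/lib/extension-release.d/extension-release.NAME describing compatibility with the host OS; the merged state can be inspected or redone by hand (illustrative commands):

    systemd-sysext status    # list extensions and whether they are merged
    systemd-sysext refresh   # unmerge and re-merge after adding/removing images
]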
Sep 5 00:05:01.384261 systemd[1]: Reloading finished in 243 ms.
Sep 5 00:05:01.406386 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 5 00:05:01.418487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 5 00:05:01.427482 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 00:05:01.430153 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 5 00:05:01.432595 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 5 00:05:01.436768 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 5 00:05:01.442286 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 00:05:01.445759 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 5 00:05:01.451354 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.451524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:01.457193 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:01.464108 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:01.475217 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:01.476408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:01.481138 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 5 00:05:01.482201 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.483745 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 5 00:05:01.485727 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:01.485937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:01.486381 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Sep 5 00:05:01.487824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:01.488100 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:01.489854 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:01.490063 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:01.494314 augenrules[1355]: No rules
Sep 5 00:05:01.495544 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 00:05:01.502233 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.502451 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:01.512469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:01.515181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:01.522559 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:01.523652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:01.525844 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 5 00:05:01.527059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.528594 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 00:05:01.530899 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 5 00:05:01.533264 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 5 00:05:01.535378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:01.535652 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:01.537380 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 5 00:05:01.539526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:01.539712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:01.542627 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:01.542801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:01.564741 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.565054 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 5 00:05:01.575331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 5 00:05:01.581264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 5 00:05:01.588322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 5 00:05:01.591207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 5 00:05:01.596374 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 5 00:05:01.597016 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1390)
Sep 5 00:05:01.605152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 00:05:01.606284 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 5 00:05:01.606310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 5 00:05:01.607128 systemd[1]: Finished ensure-sysext.service.
Sep 5 00:05:01.609204 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 5 00:05:01.611399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 5 00:05:01.611576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 5 00:05:01.613457 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 5 00:05:01.613637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 5 00:05:01.615890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 00:05:01.616109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 00:05:01.618277 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 00:05:01.618461 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 00:05:01.639855 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 00:05:01.639920 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 00:05:01.688424 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 5 00:05:01.695630 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 00:05:01.698369 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 5 00:05:01.700421 systemd-resolved[1334]: Positive Trust Anchors:
Sep 5 00:05:01.700432 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 00:05:01.700463 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 00:05:01.705163 systemd-resolved[1334]: Defaulting to hostname 'linux'.
Sep 5 00:05:01.709182 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 00:05:01.711778 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 00:05:01.713102 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 00:05:01.739407 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 00:05:01.820033 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 5 00:05:01.830011 kernel: ACPI: button: Power Button [PWRF]
Sep 5 00:05:01.832925 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 00:05:01.834674 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 00:05:01.839242 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Sep 5 00:05:01.846774 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 5 00:05:01.846949 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 5 00:05:01.849340 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 5 00:05:01.842161 systemd-networkd[1402]: lo: Link UP
Sep 5 00:05:01.842167 systemd-networkd[1402]: lo: Gained carrier
Sep 5 00:05:01.850424 systemd-networkd[1402]: Enumeration completed
Sep 5 00:05:01.850911 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 00:05:01.852891 systemd[1]: Reached target network.target - Network.
Sep 5 00:05:01.854139 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:05:01.854150 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 00:05:01.855152 systemd-networkd[1402]: eth0: Link UP
Sep 5 00:05:01.855157 systemd-networkd[1402]: eth0: Gained carrier
Sep 5 00:05:01.855180 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 00:05:01.864226 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 00:05:01.868053 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 00:05:01.868908 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection.
Sep 5 00:05:01.873398 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 5 00:05:01.873218 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 00:05:01.873298 systemd-timesyncd[1411]: Initial clock synchronization to Fri 2025-09-05 00:05:02.214648 UTC.
Sep 5 00:05:01.916114 kernel: mousedev: PS/2 mouse device common for all mice
Sep 5 00:05:01.923428 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:01.937966 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 00:05:01.938295 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:05:01.994289 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 00:05:02.008322 kernel: kvm_amd: TSC scaling supported
Sep 5 00:05:02.008389 kernel: kvm_amd: Nested Virtualization enabled
Sep 5 00:05:02.008431 kernel: kvm_amd: Nested Paging enabled
Sep 5 00:05:02.009313 kernel: kvm_amd: LBR virtualization supported
Sep 5 00:05:02.009340 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Sep 5 00:05:02.010345 kernel: kvm_amd: Virtual GIF supported
Sep 5 00:05:02.035410 kernel: EDAC MC: Ver: 3.0.0
Sep 5 00:05:02.070749 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 00:05:02.072535 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 5 00:05:02.085252 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 5 00:05:02.108153 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:05:02.143198 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 5 00:05:02.144775 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 5 00:05:02.145970 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 00:05:02.147284 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 00:05:02.148658 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 00:05:02.150205 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 00:05:02.151698 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 00:05:02.153033 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 00:05:02.154335 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
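[Editor's note: in the block above networkd brought eth0 up via DHCP again in the real root, and systemd-timesyncd then stepped the clock against 10.0.0.1:123, an NTP server most likely learned from the DHCP lease on this QEMU user network. The jump from the ~00:05:01 journal timestamps to the synchronized 00:05:02.214648 UTC is that initial correction. The resulting state can be inspected with (illustrative commands, not part of this log):

    timedatectl timesync-status   # current server, offset, poll interval
    resolvectl status             # per-link DNS configuration from the same lease
]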
Sep 5 00:05:02.154364 systemd[1]: Reached target paths.target - Path Units.
Sep 5 00:05:02.155382 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 00:05:02.157364 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 00:05:02.160446 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 00:05:02.170466 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 00:05:02.173087 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 5 00:05:02.174728 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 00:05:02.175959 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 00:05:02.176966 systemd[1]: Reached target basic.target - Basic System.
Sep 5 00:05:02.178046 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:05:02.178076 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 00:05:02.179216 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 00:05:02.181447 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 00:05:02.185710 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 5 00:05:02.186134 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 00:05:02.190535 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 00:05:02.193037 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 00:05:02.194309 jq[1446]: false
Sep 5 00:05:02.195226 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 00:05:02.197704 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 00:05:02.200959 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 00:05:02.203992 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 00:05:02.209982 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 00:05:02.211995 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 00:05:02.212734 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 00:05:02.214085 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 00:05:02.218731 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 00:05:02.221419 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 5 00:05:02.224568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 00:05:02.224946 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 00:05:02.228383 jq[1457]: true
Sep 5 00:05:02.230797 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 00:05:02.231094 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 00:05:02.239383 extend-filesystems[1447]: Found loop3
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found loop4
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found loop5
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found sr0
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda1
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda2
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda3
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found usr
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda4
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda6
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda7
Sep 5 00:05:02.254502 extend-filesystems[1447]: Found vda9
Sep 5 00:05:02.254502 extend-filesystems[1447]: Checking size of /dev/vda9
Sep 5 00:05:02.272139 update_engine[1455]: I20250905 00:05:02.252301 1455 main.cc:92] Flatcar Update Engine starting
Sep 5 00:05:02.272139 update_engine[1455]: I20250905 00:05:02.255759 1455 update_check_scheduler.cc:74] Next update check in 10m33s
Sep 5 00:05:02.251786 dbus-daemon[1445]: [system] SELinux support is enabled
Sep 5 00:05:02.242614 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 00:05:02.242846 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 00:05:02.256308 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 00:05:02.264375 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 00:05:02.274530 jq[1465]: true
Sep 5 00:05:02.277260 extend-filesystems[1447]: Resized partition /dev/vda9
Sep 5 00:05:02.281121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (1390)
Sep 5 00:05:02.287015 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 00:05:02.288908 tar[1463]: linux-amd64/helm
Sep 5 00:05:02.299632 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024)
Sep 5 00:05:02.308029 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 00:05:02.308651 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 00:05:02.308691 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 00:05:02.313277 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 00:05:02.313307 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 00:05:02.328621 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 00:05:02.338037 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 00:05:02.362655 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 00:05:02.362655 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 00:05:02.362655 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 00:05:02.373997 extend-filesystems[1447]: Resized filesystem in /dev/vda9
Sep 5 00:05:02.364480 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 00:05:02.364724 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 00:05:02.365509 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 5 00:05:02.365533 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 5 00:05:02.365860 systemd-logind[1453]: New seat seat0.
Sep 5 00:05:02.373511 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 00:05:02.377958 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 00:05:02.440164 bash[1499]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 00:05:02.441584 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 00:05:02.446053 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 00:05:02.459048 locksmithd[1489]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 00:05:02.469290 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 00:05:02.478254 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 00:05:02.489298 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 00:05:02.489643 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 00:05:02.502396 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 00:05:02.565533 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 00:05:02.578598 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 00:05:02.581288 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 5 00:05:02.582756 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 00:05:02.732496 containerd[1469]: time="2025-09-05T00:05:02.730635495Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 5 00:05:02.821924 containerd[1469]: time="2025-09-05T00:05:02.821829462Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.824462 containerd[1469]: time="2025-09-05T00:05:02.824390123Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.824509 containerd[1469]: time="2025-09-05T00:05:02.824462569Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 5 00:05:02.824509 containerd[1469]: time="2025-09-05T00:05:02.824492457Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 5 00:05:02.824929 containerd[1469]: time="2025-09-05T00:05:02.824886167Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 5 00:05:02.824929 containerd[1469]: time="2025-09-05T00:05:02.824923931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825095 containerd[1469]: time="2025-09-05T00:05:02.825066976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825131 containerd[1469]: time="2025-09-05T00:05:02.825093406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825481 containerd[1469]: time="2025-09-05T00:05:02.825431717Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825481 containerd[1469]: time="2025-09-05T00:05:02.825461407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825583 containerd[1469]: time="2025-09-05T00:05:02.825502671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825583 containerd[1469]: time="2025-09-05T00:05:02.825519918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.825703 containerd[1469]: time="2025-09-05T00:05:02.825675707Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.826151 containerd[1469]: time="2025-09-05T00:05:02.826107297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 5 00:05:02.826408 containerd[1469]: time="2025-09-05T00:05:02.826296338Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 5 00:05:02.826408 containerd[1469]: time="2025-09-05T00:05:02.826324063Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 5 00:05:02.826517 containerd[1469]: time="2025-09-05T00:05:02.826477199Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 5 00:05:02.826584 containerd[1469]: time="2025-09-05T00:05:02.826559915Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.844277770Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.844401791Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.844431292Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.844453533Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.844476985Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.844853344Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845311666Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845484661Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845509910Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845531053Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845566238Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845586504Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845603595Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847054 containerd[1469]: time="2025-09-05T00:05:02.845642383Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845668092Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845687386Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845706901Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845724148Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845752750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845773006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845791695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845809727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845828614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845867015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845890185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845909449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845928243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.847697 containerd[1469]: time="2025-09-05T00:05:02.845948613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.845968033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.845985051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846008002Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846047198Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846081159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846100328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846135648Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846213036Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846243258Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846259451Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846276928Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846291950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846311526Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 5 00:05:02.848235 containerd[1469]: time="2025-09-05T00:05:02.846330623Z" level=info msg="NRI interface is disabled by configuration."
Sep 5 00:05:02.848591 containerd[1469]: time="2025-09-05T00:05:02.846343932Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 5 00:05:02.848634 containerd[1469]: time="2025-09-05T00:05:02.846792006Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 5 00:05:02.848634 containerd[1469]: time="2025-09-05T00:05:02.846881773Z" level=info msg="Connect containerd service"
Sep 5 00:05:02.848634 containerd[1469]: time="2025-09-05T00:05:02.846944797Z" level=info msg="using legacy CRI server"
Sep 5 00:05:02.848634 containerd[1469]: time="2025-09-05T00:05:02.846956654Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 00:05:02.848634 containerd[1469]: time="2025-09-05T00:05:02.848147039Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 5 00:05:02.849793 containerd[1469]: time="2025-09-05T00:05:02.849730633Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 00:05:02.850084 containerd[1469]: time="2025-09-05T00:05:02.849950189Z" level=info msg="Start subscribing containerd event"
Sep 5 00:05:02.850213 containerd[1469]: time="2025-09-05T00:05:02.850186646Z" level=info msg="Start recovering state"
Sep 5 00:05:02.850325 containerd[1469]: time="2025-09-05T00:05:02.850301684Z" level=info msg="Start event monitor"
Sep 5 00:05:02.850369 containerd[1469]: time="2025-09-05T00:05:02.850341736Z" level=info msg="Start snapshots syncer"
Sep 5 00:05:02.850369 containerd[1469]: time="2025-09-05T00:05:02.850361438Z" level=info msg="Start cni network conf syncer for default"
Sep 5 00:05:02.850420 containerd[1469]: time="2025-09-05T00:05:02.850372700Z" level=info msg="Start streaming server"
Sep 5 00:05:02.852386 containerd[1469]: time="2025-09-05T00:05:02.850846294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 00:05:02.852386 containerd[1469]: time="2025-09-05T00:05:02.850930305Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 00:05:02.852386 containerd[1469]: time="2025-09-05T00:05:02.851425483Z" level=info msg="containerd successfully booted in 0.122809s"
Sep 5 00:05:02.851235 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 00:05:03.093315 tar[1463]: linux-amd64/LICENSE
Sep 5 00:05:03.093452 tar[1463]: linux-amd64/README.md
Sep 5 00:05:03.118401 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 00:05:03.513915 systemd-networkd[1402]: eth0: Gained IPv6LL
Sep 5 00:05:03.522954 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 00:05:03.525696 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 00:05:03.537703 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 5 00:05:03.544673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:05:03.552177 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 00:05:03.582662 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 5 00:05:03.583682 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 5 00:05:03.588881 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 00:05:03.600887 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 00:05:04.403238 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 00:05:04.473496 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:43800.service - OpenSSH per-connection server daemon (10.0.0.1:43800).
Sep 5 00:05:04.586810 sshd[1554]: Accepted publickey for core from 10.0.0.1 port 43800 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:04.598042 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:04.614677 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 00:05:04.660543 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 00:05:04.665414 systemd-logind[1453]: New session 1 of user core.
Sep 5 00:05:04.695548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 00:05:04.713217 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 00:05:04.734499 (systemd)[1558]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 00:05:04.982931 systemd[1558]: Queued start job for default target default.target.
Sep 5 00:05:05.041283 systemd[1558]: Created slice app.slice - User Application Slice.
Sep 5 00:05:05.041325 systemd[1558]: Reached target paths.target - Paths.
Sep 5 00:05:05.041346 systemd[1558]: Reached target timers.target - Timers.
Sep 5 00:05:05.044000 systemd[1558]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 00:05:05.068530 systemd[1558]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 00:05:05.068735 systemd[1558]: Reached target sockets.target - Sockets.
Sep 5 00:05:05.068761 systemd[1558]: Reached target basic.target - Basic System.
Sep 5 00:05:05.068836 systemd[1558]: Reached target default.target - Main User Target.
Sep 5 00:05:05.068900 systemd[1558]: Startup finished in 320ms.
Sep 5 00:05:05.069165 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 00:05:05.087464 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 00:05:05.168116 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:43808.service - OpenSSH per-connection server daemon (10.0.0.1:43808).
Sep 5 00:05:05.314299 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 43808 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:05.317893 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:05.331815 systemd-logind[1453]: New session 2 of user core.
Sep 5 00:05:05.342556 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 00:05:05.424071 sshd[1569]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:05.435410 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:43808.service: Deactivated successfully.
Sep 5 00:05:05.442629 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 00:05:05.446548 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit.
Sep 5 00:05:05.455604 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:43812.service - OpenSSH per-connection server daemon (10.0.0.1:43812).
Sep 5 00:05:05.459714 systemd-logind[1453]: Removed session 2.
Sep 5 00:05:05.693636 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 43812 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:05.696215 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:05.702022 systemd-logind[1453]: New session 3 of user core.
Sep 5 00:05:05.709280 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 00:05:05.771044 sshd[1576]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:05.777086 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:43812.service: Deactivated successfully.
Sep 5 00:05:05.780321 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 00:05:05.781917 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit.
Sep 5 00:05:05.799641 systemd-logind[1453]: Removed session 3.
Sep 5 00:05:05.820065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:05.822440 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 00:05:05.824860 systemd[1]: Startup finished in 1.008s (kernel) + 8.668s (initrd) + 6.488s (userspace) = 16.165s.
Sep 5 00:05:05.833760 (kubelet)[1587]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:05:06.765347 kubelet[1587]: E0905 00:05:06.765216 1587 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:05:06.772048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:05:06.772406 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:05:06.772900 systemd[1]: kubelet.service: Consumed 2.590s CPU time.
Sep 5 00:05:15.935467 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:34124.service - OpenSSH per-connection server daemon (10.0.0.1:34124).
Sep 5 00:05:15.971277 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 34124 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:15.973161 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:15.977417 systemd-logind[1453]: New session 4 of user core.
Sep 5 00:05:15.987144 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 00:05:16.043371 sshd[1600]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:16.058848 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:34124.service: Deactivated successfully.
Sep 5 00:05:16.061184 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 00:05:16.063081 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit.
Sep 5 00:05:16.072288 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:34132.service - OpenSSH per-connection server daemon (10.0.0.1:34132).
Sep 5 00:05:16.073288 systemd-logind[1453]: Removed session 4.
Sep 5 00:05:16.100977 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 34132 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:16.102670 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:16.106958 systemd-logind[1453]: New session 5 of user core.
Sep 5 00:05:16.117196 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 00:05:16.169190 sshd[1607]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:16.179086 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:34132.service: Deactivated successfully.
Sep 5 00:05:16.181112 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 00:05:16.182813 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit.
Sep 5 00:05:16.198278 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:34138.service - OpenSSH per-connection server daemon (10.0.0.1:34138).
Sep 5 00:05:16.199292 systemd-logind[1453]: Removed session 5.
Sep 5 00:05:16.226198 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 34138 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:16.227773 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:16.231585 systemd-logind[1453]: New session 6 of user core.
Sep 5 00:05:16.249120 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 00:05:16.306832 sshd[1614]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:16.316840 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:34138.service: Deactivated successfully.
Sep 5 00:05:16.318572 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 00:05:16.320491 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit.
Sep 5 00:05:16.321879 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:34146.service - OpenSSH per-connection server daemon (10.0.0.1:34146).
Sep 5 00:05:16.322680 systemd-logind[1453]: Removed session 6.
Sep 5 00:05:16.357228 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 34146 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:16.358787 sshd[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:16.362641 systemd-logind[1453]: New session 7 of user core.
Sep 5 00:05:16.372101 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 00:05:16.432665 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 00:05:16.433089 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:05:16.452861 sudo[1624]: pam_unix(sudo:session): session closed for user root
Sep 5 00:05:16.454997 sshd[1621]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:16.462959 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:34146.service: Deactivated successfully.
Sep 5 00:05:16.464955 systemd[1]: session-7.scope: Deactivated successfully.
Sep 5 00:05:16.466669 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit.
Sep 5 00:05:16.481594 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:34150.service - OpenSSH per-connection server daemon (10.0.0.1:34150).
Sep 5 00:05:16.483072 systemd-logind[1453]: Removed session 7.
Sep 5 00:05:16.511398 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 34150 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:16.513164 sshd[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:16.517266 systemd-logind[1453]: New session 8 of user core.
Sep 5 00:05:16.528127 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 5 00:05:16.584576 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 00:05:16.584940 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:05:16.589471 sudo[1633]: pam_unix(sudo:session): session closed for user root
Sep 5 00:05:16.596493 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 5 00:05:16.596860 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:05:16.618229 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 5 00:05:16.620257 auditctl[1636]: No rules
Sep 5 00:05:16.621778 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 00:05:16.622086 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 5 00:05:16.624087 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 5 00:05:16.658651 augenrules[1654]: No rules
Sep 5 00:05:16.660822 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 5 00:05:16.662307 sudo[1632]: pam_unix(sudo:session): session closed for user root
Sep 5 00:05:16.664183 sshd[1629]: pam_unix(sshd:session): session closed for user core
Sep 5 00:05:16.681647 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:34150.service: Deactivated successfully.
Sep 5 00:05:16.683798 systemd[1]: session-8.scope: Deactivated successfully.
Sep 5 00:05:16.685925 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit.
Sep 5 00:05:16.699305 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:34162.service - OpenSSH per-connection server daemon (10.0.0.1:34162).
Sep 5 00:05:16.700456 systemd-logind[1453]: Removed session 8.
Sep 5 00:05:16.729162 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 34162 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:05:16.730814 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:05:16.735083 systemd-logind[1453]: New session 9 of user core.
Sep 5 00:05:16.747115 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 00:05:16.802637 sudo[1665]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 00:05:16.803019 sudo[1665]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 00:05:16.803927 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 00:05:16.810242 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:05:17.040460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:17.045623 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:05:17.147548 kubelet[1685]: E0905 00:05:17.147473 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:05:17.155077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:05:17.155345 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:05:18.280616 (dockerd)[1700]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 00:05:18.281612 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 00:05:19.762873 dockerd[1700]: time="2025-09-05T00:05:19.762787487Z" level=info msg="Starting up"
Sep 5 00:05:20.279743 dockerd[1700]: time="2025-09-05T00:05:20.279162931Z" level=info msg="Loading containers: start."
Sep 5 00:05:20.723862 kernel: Initializing XFRM netlink socket
Sep 5 00:05:20.899530 systemd-networkd[1402]: docker0: Link UP
Sep 5 00:05:20.942139 dockerd[1700]: time="2025-09-05T00:05:20.942062154Z" level=info msg="Loading containers: done."
Sep 5 00:05:20.976368 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3805304084-merged.mount: Deactivated successfully.
Sep 5 00:05:20.982232 dockerd[1700]: time="2025-09-05T00:05:20.982139525Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 00:05:20.982436 dockerd[1700]: time="2025-09-05T00:05:20.982318156Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 5 00:05:20.982529 dockerd[1700]: time="2025-09-05T00:05:20.982503001Z" level=info msg="Daemon has completed initialization"
Sep 5 00:05:21.043227 dockerd[1700]: time="2025-09-05T00:05:21.043088963Z" level=info msg="API listen on /run/docker.sock"
Sep 5 00:05:21.043658 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 00:05:21.890318 containerd[1469]: time="2025-09-05T00:05:21.890219348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 5 00:05:22.854576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1117409579.mount: Deactivated successfully.
Sep 5 00:05:24.374879 containerd[1469]: time="2025-09-05T00:05:24.374797787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:24.375749 containerd[1469]: time="2025-09-05T00:05:24.375645627Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079631"
Sep 5 00:05:24.379763 containerd[1469]: time="2025-09-05T00:05:24.379662692Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:24.383183 containerd[1469]: time="2025-09-05T00:05:24.383127347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:24.384763 containerd[1469]: time="2025-09-05T00:05:24.384712512Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 2.494392973s"
Sep 5 00:05:24.384818 containerd[1469]: time="2025-09-05T00:05:24.384782812Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\""
Sep 5 00:05:24.385923 containerd[1469]: time="2025-09-05T00:05:24.385875353Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 5 00:05:26.065241 containerd[1469]: time="2025-09-05T00:05:26.065134974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:26.098542 containerd[1469]: time="2025-09-05T00:05:26.098172582Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714681"
Sep 5 00:05:26.100796 containerd[1469]: time="2025-09-05T00:05:26.100705455Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:26.109719 containerd[1469]: time="2025-09-05T00:05:26.109625311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:26.112899 containerd[1469]: time="2025-09-05T00:05:26.111959658Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 1.725984917s"
Sep 5 00:05:26.113179 containerd[1469]: time="2025-09-05T00:05:26.112956830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\""
Sep 5 00:05:26.113829 containerd[1469]: time="2025-09-05T00:05:26.113781690Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 5 00:05:27.211868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 00:05:27.225235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:05:27.408332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:27.413250 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:05:27.984424 kubelet[1917]: E0905 00:05:27.984341 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:05:27.989360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:05:27.989598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:05:28.387644 containerd[1469]: time="2025-09-05T00:05:28.387491613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:28.388406 containerd[1469]: time="2025-09-05T00:05:28.388359338Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782427"
Sep 5 00:05:28.389611 containerd[1469]: time="2025-09-05T00:05:28.389570590Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:28.394828 containerd[1469]: time="2025-09-05T00:05:28.394794665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:28.396430 containerd[1469]: time="2025-09-05T00:05:28.396367455Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 2.282541109s"
Sep 5 00:05:28.396506 containerd[1469]: time="2025-09-05T00:05:28.396430969Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\""
Sep 5 00:05:28.397233 containerd[1469]: time="2025-09-05T00:05:28.397129662Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 5 00:05:29.505108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount729029904.mount: Deactivated successfully.
Sep 5 00:05:30.398170 containerd[1469]: time="2025-09-05T00:05:30.398091178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:30.398867 containerd[1469]: time="2025-09-05T00:05:30.398825269Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384255"
Sep 5 00:05:30.400040 containerd[1469]: time="2025-09-05T00:05:30.400004625Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:30.401787 containerd[1469]: time="2025-09-05T00:05:30.401727762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:30.402661 containerd[1469]: time="2025-09-05T00:05:30.402619056Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.005370727s"
Sep 5 00:05:30.402661 containerd[1469]: time="2025-09-05T00:05:30.402650757Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\""
Sep 5 00:05:30.403388 containerd[1469]: time="2025-09-05T00:05:30.403202731Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 5 00:05:31.284998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2535574743.mount: Deactivated successfully.
Sep 5 00:05:32.180370 containerd[1469]: time="2025-09-05T00:05:32.180301978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:32.181417 containerd[1469]: time="2025-09-05T00:05:32.181345043Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 5 00:05:32.182615 containerd[1469]: time="2025-09-05T00:05:32.182558480Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:32.185747 containerd[1469]: time="2025-09-05T00:05:32.185709223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:32.186869 containerd[1469]: time="2025-09-05T00:05:32.186833373Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.783592333s"
Sep 5 00:05:32.186923 containerd[1469]: time="2025-09-05T00:05:32.186868697Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 5 00:05:32.187355 containerd[1469]: time="2025-09-05T00:05:32.187333699Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 00:05:32.880417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979133400.mount: Deactivated successfully.
Sep 5 00:05:32.888454 containerd[1469]: time="2025-09-05T00:05:32.888388188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:32.889143 containerd[1469]: time="2025-09-05T00:05:32.889083651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 5 00:05:32.890333 containerd[1469]: time="2025-09-05T00:05:32.890298060Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:32.894227 containerd[1469]: time="2025-09-05T00:05:32.894134240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:32.894856 containerd[1469]: time="2025-09-05T00:05:32.894827378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 707.378213ms"
Sep 5 00:05:32.894904 containerd[1469]: time="2025-09-05T00:05:32.894858982Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 5 00:05:32.895852 containerd[1469]: time="2025-09-05T00:05:32.895808178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 5 00:05:34.027812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1342414230.mount: Deactivated successfully.
Sep 5 00:05:35.840602 containerd[1469]: time="2025-09-05T00:05:35.840506132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:35.841451 containerd[1469]: time="2025-09-05T00:05:35.841377956Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709"
Sep 5 00:05:35.842790 containerd[1469]: time="2025-09-05T00:05:35.842747864Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:35.849474 containerd[1469]: time="2025-09-05T00:05:35.849386705Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.953532313s"
Sep 5 00:05:35.849474 containerd[1469]: time="2025-09-05T00:05:35.849431445Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 5 00:05:35.850206 containerd[1469]: time="2025-09-05T00:05:35.850162269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:05:38.211914 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 5 00:05:38.221180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:05:38.394868 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:38.400341 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 00:05:38.449647 kubelet[2075]: E0905 00:05:38.449548 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 00:05:38.454154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 00:05:38.454396 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 00:05:38.858779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:38.875312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:05:38.898261 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-9.scope)...
Sep 5 00:05:38.898288 systemd[1]: Reloading...
Sep 5 00:05:39.001027 zram_generator::config[2132]: No configuration found.
Sep 5 00:05:39.485444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 5 00:05:39.567444 systemd[1]: Reloading finished in 668 ms.
Sep 5 00:05:39.621411 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 00:05:39.621514 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 00:05:39.621799 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:39.624927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 00:05:39.799724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 00:05:39.804600 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 00:05:39.848090 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:05:39.848090 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 5 00:05:39.848090 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 00:05:39.848528 kubelet[2178]: I0905 00:05:39.848169 2178 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 00:05:40.403918 kubelet[2178]: I0905 00:05:40.403857 2178 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 5 00:05:40.403918 kubelet[2178]: I0905 00:05:40.403898 2178 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 00:05:40.404192 kubelet[2178]: I0905 00:05:40.404169 2178 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 5 00:05:40.426223 kubelet[2178]: I0905 00:05:40.426169 2178 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 00:05:40.426378 kubelet[2178]: E0905 00:05:40.426316 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Sep 5 00:05:40.435306 kubelet[2178]: E0905 00:05:40.435191 2178 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 5 00:05:40.435306 kubelet[2178]: I0905 00:05:40.435299 2178 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 5 00:05:40.442400 kubelet[2178]: I0905 00:05:40.442368 2178 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 00:05:40.447162 kubelet[2178]: I0905 00:05:40.447127 2178 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 5 00:05:40.447397 kubelet[2178]: I0905 00:05:40.447345 2178 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 00:05:40.447635 kubelet[2178]: I0905 00:05:40.447387 2178 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 00:05:40.447770 kubelet[2178]: I0905 00:05:40.447641 2178 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 00:05:40.447770 kubelet[2178]: I0905 00:05:40.447653 2178 container_manager_linux.go:300] "Creating device plugin manager"
Sep 5 00:05:40.447846 kubelet[2178]: I0905 00:05:40.447826 2178 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 00:05:40.450859 kubelet[2178]: I0905 00:05:40.450829 2178 kubelet.go:408] "Attempting to sync node with API server"
Sep 5 00:05:40.450859 kubelet[2178]: I0905 00:05:40.450854 2178 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 00:05:40.450930 kubelet[2178]: I0905 00:05:40.450896 2178 kubelet.go:314] "Adding apiserver pod source"
Sep 5 00:05:40.450930 kubelet[2178]: I0905 00:05:40.450920 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 00:05:40.453169 kubelet[2178]: W0905 00:05:40.453044 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Sep 5 00:05:40.453169 kubelet[2178]: E0905 00:05:40.453115 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp
10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.454665 kubelet[2178]: I0905 00:05:40.454137 2178 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:05:40.454665 kubelet[2178]: W0905 00:05:40.454481 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:40.454665 kubelet[2178]: I0905 00:05:40.454541 2178 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:05:40.454665 kubelet[2178]: E0905 00:05:40.454536 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.455708 kubelet[2178]: W0905 00:05:40.455677 2178 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 5 00:05:40.457971 kubelet[2178]: I0905 00:05:40.457937 2178 server.go:1274] "Started kubelet" Sep 5 00:05:40.458952 kubelet[2178]: I0905 00:05:40.458129 2178 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:05:40.458952 kubelet[2178]: I0905 00:05:40.458143 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:05:40.458952 kubelet[2178]: I0905 00:05:40.458545 2178 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:05:40.459331 kubelet[2178]: I0905 00:05:40.459315 2178 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:05:40.460013 kubelet[2178]: I0905 00:05:40.459570 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:05:40.460013 kubelet[2178]: I0905 00:05:40.459725 2178 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:05:40.462415 kubelet[2178]: E0905 00:05:40.462383 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:40.462415 kubelet[2178]: I0905 00:05:40.462422 2178 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:05:40.462641 kubelet[2178]: I0905 00:05:40.462612 2178 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:05:40.462710 kubelet[2178]: I0905 00:05:40.462696 2178 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:05:40.462974 kubelet[2178]: W0905 00:05:40.462941 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:40.463021 kubelet[2178]: E0905 00:05:40.463007 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: 
connection refused" logger="UnhandledError" Sep 5 00:05:40.463263 kubelet[2178]: E0905 00:05:40.463229 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms" Sep 5 00:05:40.464748 kubelet[2178]: I0905 00:05:40.464142 2178 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:05:40.464748 kubelet[2178]: I0905 00:05:40.464253 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:05:40.465352 kubelet[2178]: I0905 00:05:40.465205 2178 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:05:40.465767 kubelet[2178]: E0905 00:05:40.464078 2178 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18623a26386f6621 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 00:05:40.457907745 +0000 UTC m=+0.646309122,LastTimestamp:2025-09-05 00:05:40.457907745 +0000 UTC m=+0.646309122,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 00:05:40.465897 kubelet[2178]: E0905 00:05:40.465881 2178 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:05:40.480628 kubelet[2178]: I0905 00:05:40.480561 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:05:40.482144 kubelet[2178]: I0905 00:05:40.482113 2178 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:05:40.482144 kubelet[2178]: I0905 00:05:40.482145 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:05:40.482216 kubelet[2178]: I0905 00:05:40.482176 2178 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:05:40.482237 kubelet[2178]: E0905 00:05:40.482220 2178 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:05:40.484157 kubelet[2178]: W0905 00:05:40.483785 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:40.484157 kubelet[2178]: E0905 00:05:40.483848 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:40.484706 kubelet[2178]: I0905 00:05:40.484444 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:05:40.484706 kubelet[2178]: I0905 00:05:40.484458 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:05:40.484706 kubelet[2178]: I0905 00:05:40.484474 2178 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:40.563558 kubelet[2178]: E0905 00:05:40.563470 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:40.582858 kubelet[2178]: E0905 00:05:40.582797 2178 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:05:40.664481 kubelet[2178]: E0905 00:05:40.664305 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:40.664780 kubelet[2178]: E0905 00:05:40.664737 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms" Sep 5 00:05:40.762271 kubelet[2178]: I0905 00:05:40.762188 2178 policy_none.go:49] "None policy: Start" Sep 5 00:05:40.763367 kubelet[2178]: I0905 00:05:40.763323 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:05:40.763367 kubelet[2178]: I0905 00:05:40.763353 2178 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:05:40.764610 kubelet[2178]: E0905 00:05:40.764588 2178 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:40.772132 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 00:05:40.783257 kubelet[2178]: E0905 00:05:40.783212 2178 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 00:05:40.786442 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 00:05:40.790382 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 00:05:40.808440 kubelet[2178]: I0905 00:05:40.808375 2178 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:05:40.809129 kubelet[2178]: I0905 00:05:40.808650 2178 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:05:40.809129 kubelet[2178]: I0905 00:05:40.808664 2178 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:05:40.809129 kubelet[2178]: I0905 00:05:40.808918 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:05:40.810254 kubelet[2178]: E0905 00:05:40.810228 2178 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 00:05:40.911124 kubelet[2178]: I0905 00:05:40.911081 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:40.911655 kubelet[2178]: E0905 00:05:40.911611 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:05:41.066149 kubelet[2178]: E0905 00:05:41.066088 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms" Sep 5 00:05:41.113952 kubelet[2178]: I0905 00:05:41.113895 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:41.114402 kubelet[2178]: E0905 00:05:41.114362 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:05:41.194081 systemd[1]: Created slice kubepods-burstable-pod96f1618a837ec29921cf8b1f8a8f8aec.slice - libcontainer container kubepods-burstable-pod96f1618a837ec29921cf8b1f8a8f8aec.slice. Sep 5 00:05:41.204853 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 5 00:05:41.209266 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. 
Sep 5 00:05:41.266308 kubelet[2178]: I0905 00:05:41.266231 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96f1618a837ec29921cf8b1f8a8f8aec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"96f1618a837ec29921cf8b1f8a8f8aec\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:41.266308 kubelet[2178]: I0905 00:05:41.266282 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.266308 kubelet[2178]: I0905 00:05:41.266320 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:05:41.266516 kubelet[2178]: I0905 00:05:41.266344 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96f1618a837ec29921cf8b1f8a8f8aec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"96f1618a837ec29921cf8b1f8a8f8aec\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:41.266516 kubelet[2178]: I0905 00:05:41.266365 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96f1618a837ec29921cf8b1f8a8f8aec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"96f1618a837ec29921cf8b1f8a8f8aec\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:41.266516 kubelet[2178]: I0905 00:05:41.266388 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.266516 kubelet[2178]: I0905 00:05:41.266408 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.266516 kubelet[2178]: I0905 00:05:41.266426 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.266727 kubelet[2178]: I0905 00:05:41.266446 2178 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:41.452076 kubelet[2178]: W0905 00:05:41.451859 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:41.452076 kubelet[2178]: E0905 00:05:41.451947 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:41.504183 kubelet[2178]: E0905 00:05:41.503520 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:41.504702 containerd[1469]: time="2025-09-05T00:05:41.504651739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:96f1618a837ec29921cf8b1f8a8f8aec,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:41.508030 kubelet[2178]: E0905 00:05:41.507970 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:41.508691 containerd[1469]: time="2025-09-05T00:05:41.508648546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:41.511883 kubelet[2178]: E0905 00:05:41.511838 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:41.512229 containerd[1469]: time="2025-09-05T00:05:41.512190120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:41.516532 kubelet[2178]: I0905 00:05:41.516498 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:41.517053 kubelet[2178]: E0905 00:05:41.516991 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:05:41.866924 kubelet[2178]: E0905 00:05:41.866827 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" Sep 5 00:05:41.986054 kubelet[2178]: W0905 00:05:41.985963 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:41.986054 kubelet[2178]: E0905 00:05:41.986062 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: 
connection refused" logger="UnhandledError" Sep 5 00:05:42.024249 kubelet[2178]: W0905 00:05:42.024164 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:42.024249 kubelet[2178]: E0905 00:05:42.024249 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:42.037176 kubelet[2178]: W0905 00:05:42.037100 2178 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Sep 5 00:05:42.037176 kubelet[2178]: E0905 00:05:42.037174 2178 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:42.319334 kubelet[2178]: I0905 00:05:42.319301 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:42.319677 kubelet[2178]: E0905 00:05:42.319646 2178 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Sep 5 00:05:42.561441 kubelet[2178]: E0905 00:05:42.561379 2178 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Sep 5 00:05:42.918262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3283098007.mount: Deactivated successfully. 
Sep 5 00:05:42.928189 containerd[1469]: time="2025-09-05T00:05:42.928123268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.929272 containerd[1469]: time="2025-09-05T00:05:42.929225272Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.930004 containerd[1469]: time="2025-09-05T00:05:42.929961956Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.930926 containerd[1469]: time="2025-09-05T00:05:42.930850930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:05:42.931716 containerd[1469]: time="2025-09-05T00:05:42.931680993Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 5 00:05:42.932533 containerd[1469]: time="2025-09-05T00:05:42.932511585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 5 00:05:42.933337 containerd[1469]: time="2025-09-05T00:05:42.933299884Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.939022 containerd[1469]: time="2025-09-05T00:05:42.937108370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 5 00:05:42.939419 containerd[1469]: time="2025-09-05T00:05:42.939365496Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.427066058s" Sep 5 00:05:42.941349 containerd[1469]: time="2025-09-05T00:05:42.941313280Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.436562838s" Sep 5 00:05:42.942280 containerd[1469]: time="2025-09-05T00:05:42.942244839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.433498411s" Sep 5 00:05:43.113778 containerd[1469]: time="2025-09-05T00:05:43.113654042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:43.113778 containerd[1469]: time="2025-09-05T00:05:43.113713575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:43.113778 containerd[1469]: time="2025-09-05T00:05:43.113728431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.114994 containerd[1469]: time="2025-09-05T00:05:43.114831074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:43.114994 containerd[1469]: time="2025-09-05T00:05:43.114915608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:43.114994 containerd[1469]: time="2025-09-05T00:05:43.114943856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.115208 containerd[1469]: time="2025-09-05T00:05:43.115136931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.116487 containerd[1469]: time="2025-09-05T00:05:43.115548972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.144746 containerd[1469]: time="2025-09-05T00:05:43.144603751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:43.144746 containerd[1469]: time="2025-09-05T00:05:43.144690549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:43.144746 containerd[1469]: time="2025-09-05T00:05:43.144707590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.146066 containerd[1469]: time="2025-09-05T00:05:43.145929010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:43.167240 systemd[1]: Started cri-containerd-152ee37d90f4c10744745f31d7e4d738c586e874d53038b41fc4f4aaacf9e1eb.scope - libcontainer container 152ee37d90f4c10744745f31d7e4d738c586e874d53038b41fc4f4aaacf9e1eb. Sep 5 00:05:43.171663 systemd[1]: Started cri-containerd-0687504348bbcdb11be8eca17d0d3396be110de5365bf04d8ac12b25f27c5b72.scope - libcontainer container 0687504348bbcdb11be8eca17d0d3396be110de5365bf04d8ac12b25f27c5b72. Sep 5 00:05:43.180250 systemd[1]: Started cri-containerd-e1c6b77d60d94b00ee767b6b0295a4f2dbc583b7ee2cb6ffd6edf1ffda1a1f28.scope - libcontainer container e1c6b77d60d94b00ee767b6b0295a4f2dbc583b7ee2cb6ffd6edf1ffda1a1f28. 
Sep 5 00:05:43.239274 containerd[1469]: time="2025-09-05T00:05:43.239221264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"152ee37d90f4c10744745f31d7e4d738c586e874d53038b41fc4f4aaacf9e1eb\"" Sep 5 00:05:43.242652 kubelet[2178]: E0905 00:05:43.242620 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.252119 containerd[1469]: time="2025-09-05T00:05:43.251751246Z" level=info msg="CreateContainer within sandbox \"152ee37d90f4c10744745f31d7e4d738c586e874d53038b41fc4f4aaacf9e1eb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 00:05:43.252119 containerd[1469]: time="2025-09-05T00:05:43.251852860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:96f1618a837ec29921cf8b1f8a8f8aec,Namespace:kube-system,Attempt:0,} returns sandbox id \"0687504348bbcdb11be8eca17d0d3396be110de5365bf04d8ac12b25f27c5b72\"" Sep 5 00:05:43.253071 kubelet[2178]: E0905 00:05:43.253033 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.255785 containerd[1469]: time="2025-09-05T00:05:43.255739584Z" level=info msg="CreateContainer within sandbox \"0687504348bbcdb11be8eca17d0d3396be110de5365bf04d8ac12b25f27c5b72\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 00:05:43.267582 containerd[1469]: time="2025-09-05T00:05:43.267505437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1c6b77d60d94b00ee767b6b0295a4f2dbc583b7ee2cb6ffd6edf1ffda1a1f28\"" Sep 5 00:05:43.268506 kubelet[2178]: E0905 00:05:43.268470 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.270824 containerd[1469]: time="2025-09-05T00:05:43.270782344Z" level=info msg="CreateContainer within sandbox \"e1c6b77d60d94b00ee767b6b0295a4f2dbc583b7ee2cb6ffd6edf1ffda1a1f28\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 00:05:43.278114 containerd[1469]: time="2025-09-05T00:05:43.278054875Z" level=info msg="CreateContainer within sandbox \"152ee37d90f4c10744745f31d7e4d738c586e874d53038b41fc4f4aaacf9e1eb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"61786720db16b4c8cb0c793e26e8e1c17a77ed10f64c12f6c35d03f9a42082b1\"" Sep 5 00:05:43.278767 containerd[1469]: time="2025-09-05T00:05:43.278717749Z" level=info msg="StartContainer for \"61786720db16b4c8cb0c793e26e8e1c17a77ed10f64c12f6c35d03f9a42082b1\"" Sep 5 00:05:43.290355 containerd[1469]: time="2025-09-05T00:05:43.290274862Z" level=info msg="CreateContainer within sandbox \"0687504348bbcdb11be8eca17d0d3396be110de5365bf04d8ac12b25f27c5b72\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dfe6b06f85fc40c58858b9da9369a7e84510e487267b1e3422973fdae8c1ca8e\"" Sep 5 00:05:43.290993 containerd[1469]: time="2025-09-05T00:05:43.290949725Z" level=info msg="StartContainer for \"dfe6b06f85fc40c58858b9da9369a7e84510e487267b1e3422973fdae8c1ca8e\"" Sep 5 00:05:43.294065 
containerd[1469]: time="2025-09-05T00:05:43.294025808Z" level=info msg="CreateContainer within sandbox \"e1c6b77d60d94b00ee767b6b0295a4f2dbc583b7ee2cb6ffd6edf1ffda1a1f28\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d4afe1d13184d56c42227cc104c7f51a459f5114157df34ab07e0cabd38aa40\"" Sep 5 00:05:43.295733 containerd[1469]: time="2025-09-05T00:05:43.294704762Z" level=info msg="StartContainer for \"4d4afe1d13184d56c42227cc104c7f51a459f5114157df34ab07e0cabd38aa40\"" Sep 5 00:05:43.313304 systemd[1]: Started cri-containerd-61786720db16b4c8cb0c793e26e8e1c17a77ed10f64c12f6c35d03f9a42082b1.scope - libcontainer container 61786720db16b4c8cb0c793e26e8e1c17a77ed10f64c12f6c35d03f9a42082b1. Sep 5 00:05:43.330179 systemd[1]: Started cri-containerd-4d4afe1d13184d56c42227cc104c7f51a459f5114157df34ab07e0cabd38aa40.scope - libcontainer container 4d4afe1d13184d56c42227cc104c7f51a459f5114157df34ab07e0cabd38aa40. Sep 5 00:05:43.331798 systemd[1]: Started cri-containerd-dfe6b06f85fc40c58858b9da9369a7e84510e487267b1e3422973fdae8c1ca8e.scope - libcontainer container dfe6b06f85fc40c58858b9da9369a7e84510e487267b1e3422973fdae8c1ca8e. Sep 5 00:05:43.385872 containerd[1469]: time="2025-09-05T00:05:43.385808358Z" level=info msg="StartContainer for \"61786720db16b4c8cb0c793e26e8e1c17a77ed10f64c12f6c35d03f9a42082b1\" returns successfully" Sep 5 00:05:43.388655 containerd[1469]: time="2025-09-05T00:05:43.388618530Z" level=info msg="StartContainer for \"4d4afe1d13184d56c42227cc104c7f51a459f5114157df34ab07e0cabd38aa40\" returns successfully" Sep 5 00:05:43.405003 containerd[1469]: time="2025-09-05T00:05:43.401921393Z" level=info msg="StartContainer for \"dfe6b06f85fc40c58858b9da9369a7e84510e487267b1e3422973fdae8c1ca8e\" returns successfully" Sep 5 00:05:43.493515 kubelet[2178]: E0905 00:05:43.493464 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.495789 kubelet[2178]: E0905 00:05:43.495758 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.498416 kubelet[2178]: E0905 00:05:43.498389 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:43.924756 kubelet[2178]: I0905 00:05:43.924582 2178 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:44.501763 kubelet[2178]: E0905 00:05:44.501715 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:45.399189 kubelet[2178]: E0905 00:05:45.399134 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 00:05:45.454476 kubelet[2178]: I0905 00:05:45.454434 2178 apiserver.go:52] "Watching apiserver" Sep 5 00:05:45.463491 kubelet[2178]: I0905 00:05:45.463439 2178 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:05:45.479544 kubelet[2178]: I0905 00:05:45.479465 2178 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:05:45.869095 kubelet[2178]: E0905 00:05:45.869038 2178 kubelet.go:1915] "Failed 
creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:45.869697 kubelet[2178]: E0905 00:05:45.869267 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:46.436726 kubelet[2178]: E0905 00:05:46.436518 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:46.502725 kubelet[2178]: E0905 00:05:46.502652 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:47.274178 systemd[1]: Reloading requested from client PID 2454 ('systemctl') (unit session-9.scope)... Sep 5 00:05:47.274195 systemd[1]: Reloading... Sep 5 00:05:47.363030 zram_generator::config[2496]: No configuration found. Sep 5 00:05:47.480908 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 5 00:05:47.574077 systemd[1]: Reloading finished in 299 ms. Sep 5 00:05:47.619663 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:47.643600 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 00:05:47.643921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:47.643994 systemd[1]: kubelet.service: Consumed 1.209s CPU time, 130.3M memory peak, 0B memory swap peak. Sep 5 00:05:47.653512 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 00:05:47.659615 update_engine[1455]: I20250905 00:05:47.659525 1455 update_attempter.cc:509] Updating boot flags... Sep 5 00:05:47.704033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2540) Sep 5 00:05:47.745009 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2539) Sep 5 00:05:47.791067 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 33 scanned by (udev-worker) (2539) Sep 5 00:05:47.892329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 00:05:47.912614 (kubelet)[2553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 00:05:47.961997 kubelet[2553]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 00:05:47.961997 kubelet[2553]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 5 00:05:47.961997 kubelet[2553]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 5 00:05:47.962497 kubelet[2553]: I0905 00:05:47.962051 2553 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 00:05:47.968673 kubelet[2553]: I0905 00:05:47.968613 2553 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 5 00:05:47.968673 kubelet[2553]: I0905 00:05:47.968657 2553 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 00:05:47.969053 kubelet[2553]: I0905 00:05:47.968959 2553 server.go:934] "Client rotation is on, will bootstrap in background" Sep 5 00:05:47.970530 kubelet[2553]: I0905 00:05:47.970492 2553 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 5 00:05:47.972691 kubelet[2553]: I0905 00:05:47.972641 2553 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 00:05:47.978509 kubelet[2553]: E0905 00:05:47.978469 2553 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 5 00:05:47.978509 kubelet[2553]: I0905 00:05:47.978507 2553 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 5 00:05:47.984019 kubelet[2553]: I0905 00:05:47.983971 2553 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 00:05:47.984170 kubelet[2553]: I0905 00:05:47.984137 2553 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 5 00:05:47.984389 kubelet[2553]: I0905 00:05:47.984331 2553 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 00:05:47.984567 kubelet[2553]: I0905 00:05:47.984373 2553 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 5 00:05:47.984661 kubelet[2553]: I0905 00:05:47.984569 2553 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 00:05:47.984661 kubelet[2553]: I0905 00:05:47.984580 2553 container_manager_linux.go:300] "Creating device plugin manager" Sep 5 00:05:47.984661 kubelet[2553]: I0905 00:05:47.984621 2553 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:47.984768 kubelet[2553]: I0905 00:05:47.984758 2553 kubelet.go:408] "Attempting to sync node with API server" Sep 5 00:05:47.984800 kubelet[2553]: I0905 00:05:47.984776 2553 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 00:05:47.984873 kubelet[2553]: I0905 00:05:47.984850 2553 kubelet.go:314] "Adding apiserver pod source" Sep 5 00:05:47.984873 kubelet[2553]: I0905 00:05:47.984870 2553 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 00:05:47.989018 kubelet[2553]: I0905 00:05:47.986020 2553 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 5 00:05:47.989018 kubelet[2553]: I0905 00:05:47.986460 2553 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 5 00:05:47.989018 kubelet[2553]: I0905 00:05:47.987035 2553 server.go:1274] "Started kubelet" Sep 5 00:05:47.989018 kubelet[2553]: I0905 00:05:47.987183 2553 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 00:05:47.989018 kubelet[2553]: I0905 00:05:47.987382 2553 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 00:05:47.989018 kubelet[2553]: I0905 00:05:47.988475 2553 server.go:449] "Adding debug handlers to kubelet server" Sep 5 00:05:47.990126 kubelet[2553]: I0905 00:05:47.990097 2553 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 00:05:47.998064 kubelet[2553]: I0905 00:05:47.997244 2553 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 00:05:47.998626 kubelet[2553]: I0905 00:05:47.998592 2553 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 00:05:48.000108 kubelet[2553]: I0905 00:05:48.000086 2553 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 5 00:05:48.040297 kubelet[2553]: E0905 00:05:48.040213 2553 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 00:05:48.040950 kubelet[2553]: I0905 00:05:48.040931 2553 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 5 00:05:48.042057 kubelet[2553]: I0905 00:05:48.042042 2553 reconciler.go:26] "Reconciler: start to sync state" Sep 5 00:05:48.052425 kubelet[2553]: I0905 00:05:48.050612 2553 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 00:05:48.053492 kubelet[2553]: E0905 00:05:48.053448 2553 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 00:05:48.053492 kubelet[2553]: I0905 00:05:48.053492 2553 factory.go:221] Registration of the containerd container factory successfully Sep 5 00:05:48.053607 kubelet[2553]: I0905 00:05:48.053505 2553 factory.go:221] Registration of the systemd container factory successfully Sep 5 00:05:48.054025 kubelet[2553]: I0905 00:05:48.053965 2553 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 5 00:05:48.056390 kubelet[2553]: I0905 00:05:48.056369 2553 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 5 00:05:48.056548 kubelet[2553]: I0905 00:05:48.056531 2553 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 5 00:05:48.056684 kubelet[2553]: I0905 00:05:48.056668 2553 kubelet.go:2321] "Starting kubelet main sync loop" Sep 5 00:05:48.056822 kubelet[2553]: E0905 00:05:48.056795 2553 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 00:05:48.099001 kubelet[2553]: I0905 00:05:48.098937 2553 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 5 00:05:48.099001 kubelet[2553]: I0905 00:05:48.098959 2553 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 5 00:05:48.099001 kubelet[2553]: I0905 00:05:48.098995 2553 state_mem.go:36] "Initialized new in-memory state store" Sep 5 00:05:48.099279 kubelet[2553]: I0905 00:05:48.099212 2553 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 00:05:48.099279 kubelet[2553]: I0905 00:05:48.099223 2553 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 00:05:48.099279 kubelet[2553]: I0905 00:05:48.099250 2553 policy_none.go:49] "None policy: Start" Sep 5 00:05:48.101212 kubelet[2553]: I0905 00:05:48.100134 2553 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 5 00:05:48.101212 kubelet[2553]: I0905 00:05:48.100176 2553 state_mem.go:35] "Initializing new in-memory state store" Sep 5 00:05:48.101212 kubelet[2553]: I0905 00:05:48.100395 2553 state_mem.go:75] "Updated machine memory state" Sep 5 00:05:48.104970 kubelet[2553]: I0905 00:05:48.104944 2553 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 5 00:05:48.105224 kubelet[2553]: I0905 00:05:48.105195 2553 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 00:05:48.105267 kubelet[2553]: I0905 00:05:48.105219 2553 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 00:05:48.105619 kubelet[2553]: I0905 00:05:48.105555 2553 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 00:05:48.163728 kubelet[2553]: E0905 00:05:48.163593 2553 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.211183 kubelet[2553]: I0905 00:05:48.211133 2553 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 5 00:05:48.219636 kubelet[2553]: I0905 00:05:48.219590 2553 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 5 00:05:48.219756 kubelet[2553]: I0905 00:05:48.219672 2553 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 5 00:05:48.243173 kubelet[2553]: I0905 00:05:48.243099 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/96f1618a837ec29921cf8b1f8a8f8aec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"96f1618a837ec29921cf8b1f8a8f8aec\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.243173 kubelet[2553]: I0905 00:05:48.243152 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/96f1618a837ec29921cf8b1f8a8f8aec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"96f1618a837ec29921cf8b1f8a8f8aec\") " 
pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.243173 kubelet[2553]: I0905 00:05:48.243173 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:48.243173 kubelet[2553]: I0905 00:05:48.243190 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:48.243423 kubelet[2553]: I0905 00:05:48.243208 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 5 00:05:48.243423 kubelet[2553]: I0905 00:05:48.243227 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/96f1618a837ec29921cf8b1f8a8f8aec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"96f1618a837ec29921cf8b1f8a8f8aec\") " pod="kube-system/kube-apiserver-localhost" Sep 5 00:05:48.243423 kubelet[2553]: I0905 00:05:48.243243 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:48.243423 kubelet[2553]: I0905 00:05:48.243258 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:48.243423 kubelet[2553]: I0905 00:05:48.243273 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 00:05:48.463766 kubelet[2553]: E0905 00:05:48.463735 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:48.463917 kubelet[2553]: E0905 00:05:48.463735 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:48.463942 kubelet[2553]: E0905 00:05:48.463905 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 
00:05:48.985557 kubelet[2553]: I0905 00:05:48.985510 2553 apiserver.go:52] "Watching apiserver" Sep 5 00:05:49.041418 kubelet[2553]: I0905 00:05:49.041382 2553 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 5 00:05:49.077231 kubelet[2553]: E0905 00:05:49.077154 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:49.078245 kubelet[2553]: E0905 00:05:49.077571 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:49.078245 kubelet[2553]: E0905 00:05:49.077800 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:49.116315 kubelet[2553]: I0905 00:05:49.116216 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.116188827 podStartE2EDuration="1.116188827s" podCreationTimestamp="2025-09-05 00:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:49.097358514 +0000 UTC m=+1.174320296" watchObservedRunningTime="2025-09-05 00:05:49.116188827 +0000 UTC m=+1.193150609" Sep 5 00:05:49.130400 kubelet[2553]: I0905 00:05:49.130314 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.130276597 podStartE2EDuration="3.130276597s" podCreationTimestamp="2025-09-05 00:05:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:49.116506527 +0000 UTC m=+1.193468309" watchObservedRunningTime="2025-09-05 00:05:49.130276597 +0000 UTC m=+1.207238379" Sep 5 00:05:49.130658 kubelet[2553]: I0905 00:05:49.130607 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.130596812 podStartE2EDuration="1.130596812s" podCreationTimestamp="2025-09-05 00:05:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:49.126572849 +0000 UTC m=+1.203534641" watchObservedRunningTime="2025-09-05 00:05:49.130596812 +0000 UTC m=+1.207558604" Sep 5 00:05:50.077939 kubelet[2553]: E0905 00:05:50.077900 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:50.783179 kubelet[2553]: E0905 00:05:50.783136 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:51.079431 kubelet[2553]: E0905 00:05:51.079295 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:53.861181 kubelet[2553]: I0905 00:05:53.861130 2553 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 00:05:53.861727 
containerd[1469]: time="2025-09-05T00:05:53.861490428Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 00:05:53.862111 kubelet[2553]: I0905 00:05:53.861719 2553 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 00:05:54.303548 systemd[1]: Created slice kubepods-besteffort-podc68d0300_1552_4e00_a5c0_e0be1068f47c.slice - libcontainer container kubepods-besteffort-podc68d0300_1552_4e00_a5c0_e0be1068f47c.slice. Sep 5 00:05:54.375567 kubelet[2553]: I0905 00:05:54.375509 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c68d0300-1552-4e00-a5c0-e0be1068f47c-lib-modules\") pod \"kube-proxy-dgpvh\" (UID: \"c68d0300-1552-4e00-a5c0-e0be1068f47c\") " pod="kube-system/kube-proxy-dgpvh" Sep 5 00:05:54.375567 kubelet[2553]: I0905 00:05:54.375544 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c68d0300-1552-4e00-a5c0-e0be1068f47c-kube-proxy\") pod \"kube-proxy-dgpvh\" (UID: \"c68d0300-1552-4e00-a5c0-e0be1068f47c\") " pod="kube-system/kube-proxy-dgpvh" Sep 5 00:05:54.375567 kubelet[2553]: I0905 00:05:54.375561 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c68d0300-1552-4e00-a5c0-e0be1068f47c-xtables-lock\") pod \"kube-proxy-dgpvh\" (UID: \"c68d0300-1552-4e00-a5c0-e0be1068f47c\") " pod="kube-system/kube-proxy-dgpvh" Sep 5 00:05:54.375567 kubelet[2553]: I0905 00:05:54.375577 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wjw6\" (UniqueName: \"kubernetes.io/projected/c68d0300-1552-4e00-a5c0-e0be1068f47c-kube-api-access-5wjw6\") pod \"kube-proxy-dgpvh\" (UID: \"c68d0300-1552-4e00-a5c0-e0be1068f47c\") " pod="kube-system/kube-proxy-dgpvh" Sep 5 00:05:54.615367 kubelet[2553]: E0905 00:05:54.615204 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:54.615916 containerd[1469]: time="2025-09-05T00:05:54.615861376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgpvh,Uid:c68d0300-1552-4e00-a5c0-e0be1068f47c,Namespace:kube-system,Attempt:0,}" Sep 5 00:05:54.642761 containerd[1469]: time="2025-09-05T00:05:54.642618477Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:54.642761 containerd[1469]: time="2025-09-05T00:05:54.642677066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:54.642761 containerd[1469]: time="2025-09-05T00:05:54.642690966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:54.642970 containerd[1469]: time="2025-09-05T00:05:54.642800836Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:54.673139 systemd[1]: Started cri-containerd-7cf97841e0688292dc0837d1515383c13c111b77ddea77e5cd4525d809a8e394.scope - libcontainer container 7cf97841e0688292dc0837d1515383c13c111b77ddea77e5cd4525d809a8e394. Sep 5 00:05:54.720675 kubelet[2553]: W0905 00:05:54.720613 2553 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Sep 5 00:05:54.720675 kubelet[2553]: E0905 00:05:54.720672 2553 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Sep 5 00:05:54.726109 containerd[1469]: time="2025-09-05T00:05:54.726063458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgpvh,Uid:c68d0300-1552-4e00-a5c0-e0be1068f47c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7cf97841e0688292dc0837d1515383c13c111b77ddea77e5cd4525d809a8e394\"" Sep 5 00:05:54.728010 kubelet[2553]: E0905 00:05:54.727962 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:54.730252 systemd[1]: Created slice kubepods-besteffort-pod56b771cb_2897_41af_97a3_0a85d44fe9f6.slice - libcontainer container kubepods-besteffort-pod56b771cb_2897_41af_97a3_0a85d44fe9f6.slice. 
Sep 5 00:05:54.731780 containerd[1469]: time="2025-09-05T00:05:54.731715163Z" level=info msg="CreateContainer within sandbox \"7cf97841e0688292dc0837d1515383c13c111b77ddea77e5cd4525d809a8e394\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 00:05:54.754913 containerd[1469]: time="2025-09-05T00:05:54.754866443Z" level=info msg="CreateContainer within sandbox \"7cf97841e0688292dc0837d1515383c13c111b77ddea77e5cd4525d809a8e394\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"50fe269e555ea3b367f13583f5c17b28c074394ff0c1783ec47db2266c6222fc\"" Sep 5 00:05:54.755744 containerd[1469]: time="2025-09-05T00:05:54.755697990Z" level=info msg="StartContainer for \"50fe269e555ea3b367f13583f5c17b28c074394ff0c1783ec47db2266c6222fc\"" Sep 5 00:05:54.777570 kubelet[2553]: I0905 00:05:54.777533 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/56b771cb-2897-41af-97a3-0a85d44fe9f6-var-lib-calico\") pod \"tigera-operator-58fc44c59b-nv8nh\" (UID: \"56b771cb-2897-41af-97a3-0a85d44fe9f6\") " pod="tigera-operator/tigera-operator-58fc44c59b-nv8nh" Sep 5 00:05:54.777570 kubelet[2553]: I0905 00:05:54.777584 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vpgf\" (UniqueName: \"kubernetes.io/projected/56b771cb-2897-41af-97a3-0a85d44fe9f6-kube-api-access-9vpgf\") pod \"tigera-operator-58fc44c59b-nv8nh\" (UID: \"56b771cb-2897-41af-97a3-0a85d44fe9f6\") " pod="tigera-operator/tigera-operator-58fc44c59b-nv8nh" Sep 5 00:05:54.789122 systemd[1]: Started cri-containerd-50fe269e555ea3b367f13583f5c17b28c074394ff0c1783ec47db2266c6222fc.scope - libcontainer container 50fe269e555ea3b367f13583f5c17b28c074394ff0c1783ec47db2266c6222fc. Sep 5 00:05:54.825758 containerd[1469]: time="2025-09-05T00:05:54.825717634Z" level=info msg="StartContainer for \"50fe269e555ea3b367f13583f5c17b28c074394ff0c1783ec47db2266c6222fc\" returns successfully" Sep 5 00:05:55.037807 containerd[1469]: time="2025-09-05T00:05:55.037761443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nv8nh,Uid:56b771cb-2897-41af-97a3-0a85d44fe9f6,Namespace:tigera-operator,Attempt:0,}" Sep 5 00:05:55.066025 containerd[1469]: time="2025-09-05T00:05:55.065913528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:05:55.066025 containerd[1469]: time="2025-09-05T00:05:55.065997321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:05:55.066025 containerd[1469]: time="2025-09-05T00:05:55.066009136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:55.066295 containerd[1469]: time="2025-09-05T00:05:55.066105475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:05:55.086334 systemd[1]: Started cri-containerd-00181b0e5ba2de759316bec43e1d2ac6997ae96f57f0b55a3a4e1cee96475a42.scope - libcontainer container 00181b0e5ba2de759316bec43e1d2ac6997ae96f57f0b55a3a4e1cee96475a42. 
Sep 5 00:05:55.087077 kubelet[2553]: E0905 00:05:55.086797 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:55.102482 kubelet[2553]: I0905 00:05:55.102412 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dgpvh" podStartSLOduration=1.1023856300000001 podStartE2EDuration="1.10238563s" podCreationTimestamp="2025-09-05 00:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:05:55.097959173 +0000 UTC m=+7.174920955" watchObservedRunningTime="2025-09-05 00:05:55.10238563 +0000 UTC m=+7.179347402" Sep 5 00:05:55.141622 containerd[1469]: time="2025-09-05T00:05:55.141543538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nv8nh,Uid:56b771cb-2897-41af-97a3-0a85d44fe9f6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"00181b0e5ba2de759316bec43e1d2ac6997ae96f57f0b55a3a4e1cee96475a42\"" Sep 5 00:05:55.143615 containerd[1469]: time="2025-09-05T00:05:55.143574701Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 00:05:57.036493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361347071.mount: Deactivated successfully. Sep 5 00:05:58.390769 kubelet[2553]: E0905 00:05:58.389228 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:05:58.734893 containerd[1469]: time="2025-09-05T00:05:58.734814982Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:58.735955 containerd[1469]: time="2025-09-05T00:05:58.735861039Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 5 00:05:58.737042 containerd[1469]: time="2025-09-05T00:05:58.737008220Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:58.739402 containerd[1469]: time="2025-09-05T00:05:58.739362503Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:05:58.740035 containerd[1469]: time="2025-09-05T00:05:58.739973328Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.596354092s" Sep 5 00:05:58.740035 containerd[1469]: time="2025-09-05T00:05:58.740028435Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 5 00:05:58.742195 containerd[1469]: time="2025-09-05T00:05:58.742130329Z" level=info msg="CreateContainer within sandbox \"00181b0e5ba2de759316bec43e1d2ac6997ae96f57f0b55a3a4e1cee96475a42\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 00:05:58.945122 containerd[1469]: 
time="2025-09-05T00:05:58.945035975Z" level=info msg="CreateContainer within sandbox \"00181b0e5ba2de759316bec43e1d2ac6997ae96f57f0b55a3a4e1cee96475a42\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be9de5bca900b346a63f5cc2b82e9c5a748afa27c4a40e34d6a8d8c12f2f1ec6\"" Sep 5 00:05:58.945725 containerd[1469]: time="2025-09-05T00:05:58.945686456Z" level=info msg="StartContainer for \"be9de5bca900b346a63f5cc2b82e9c5a748afa27c4a40e34d6a8d8c12f2f1ec6\"" Sep 5 00:05:58.980162 systemd[1]: Started cri-containerd-be9de5bca900b346a63f5cc2b82e9c5a748afa27c4a40e34d6a8d8c12f2f1ec6.scope - libcontainer container be9de5bca900b346a63f5cc2b82e9c5a748afa27c4a40e34d6a8d8c12f2f1ec6. Sep 5 00:05:59.009689 containerd[1469]: time="2025-09-05T00:05:59.009506652Z" level=info msg="StartContainer for \"be9de5bca900b346a63f5cc2b82e9c5a748afa27c4a40e34d6a8d8c12f2f1ec6\" returns successfully" Sep 5 00:05:59.096030 kubelet[2553]: E0905 00:05:59.095968 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:00.788899 kubelet[2553]: E0905 00:06:00.788846 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:00.801110 kubelet[2553]: I0905 00:06:00.800287 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-nv8nh" podStartSLOduration=3.202493054 podStartE2EDuration="6.800261097s" podCreationTimestamp="2025-09-05 00:05:54 +0000 UTC" firstStartedPulling="2025-09-05 00:05:55.143055102 +0000 UTC m=+7.220016884" lastFinishedPulling="2025-09-05 00:05:58.740823145 +0000 UTC m=+10.817784927" observedRunningTime="2025-09-05 00:05:59.112747414 +0000 UTC m=+11.189709196" watchObservedRunningTime="2025-09-05 00:06:00.800261097 +0000 UTC m=+12.877222879" Sep 5 00:06:01.017302 kubelet[2553]: E0905 00:06:01.017237 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:04.983505 sudo[1665]: pam_unix(sudo:session): session closed for user root Sep 5 00:06:04.986727 sshd[1662]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:04.991274 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:34162.service: Deactivated successfully. Sep 5 00:06:04.995171 systemd[1]: session-9.scope: Deactivated successfully. Sep 5 00:06:04.995439 systemd[1]: session-9.scope: Consumed 6.543s CPU time, 156.6M memory peak, 0B memory swap peak. Sep 5 00:06:04.997588 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Sep 5 00:06:04.999559 systemd-logind[1453]: Removed session 9. 
Sep 5 00:06:07.357965 kubelet[2553]: I0905 00:06:07.357648 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a745ff5-b862-4017-9646-62fe78e23a61-typha-certs\") pod \"calico-typha-786cf5595b-z67t5\" (UID: \"8a745ff5-b862-4017-9646-62fe78e23a61\") " pod="calico-system/calico-typha-786cf5595b-z67t5" Sep 5 00:06:07.357965 kubelet[2553]: I0905 00:06:07.357701 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a745ff5-b862-4017-9646-62fe78e23a61-tigera-ca-bundle\") pod \"calico-typha-786cf5595b-z67t5\" (UID: \"8a745ff5-b862-4017-9646-62fe78e23a61\") " pod="calico-system/calico-typha-786cf5595b-z67t5" Sep 5 00:06:07.357965 kubelet[2553]: I0905 00:06:07.357722 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jr2z\" (UniqueName: \"kubernetes.io/projected/8a745ff5-b862-4017-9646-62fe78e23a61-kube-api-access-2jr2z\") pod \"calico-typha-786cf5595b-z67t5\" (UID: \"8a745ff5-b862-4017-9646-62fe78e23a61\") " pod="calico-system/calico-typha-786cf5595b-z67t5" Sep 5 00:06:07.361766 systemd[1]: Created slice kubepods-besteffort-pod8a745ff5_b862_4017_9646_62fe78e23a61.slice - libcontainer container kubepods-besteffort-pod8a745ff5_b862_4017_9646_62fe78e23a61.slice. Sep 5 00:06:07.824184 systemd[1]: Created slice kubepods-besteffort-podb8066641_3407_4c71_a83a_88b92dfc96c3.slice - libcontainer container kubepods-besteffort-podb8066641_3407_4c71_a83a_88b92dfc96c3.slice. Sep 5 00:06:07.961656 kubelet[2553]: I0905 00:06:07.961583 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-cni-net-dir\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.961656 kubelet[2553]: I0905 00:06:07.961646 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-flexvol-driver-host\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.961656 kubelet[2553]: I0905 00:06:07.961671 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-var-lib-calico\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962098 kubelet[2553]: I0905 00:06:07.961692 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-cni-bin-dir\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962098 kubelet[2553]: I0905 00:06:07.961711 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8066641-3407-4c71-a83a-88b92dfc96c3-tigera-ca-bundle\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " 
pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962098 kubelet[2553]: I0905 00:06:07.961728 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-var-run-calico\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962098 kubelet[2553]: I0905 00:06:07.961748 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-cni-log-dir\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962098 kubelet[2553]: I0905 00:06:07.961764 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-xtables-lock\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962307 kubelet[2553]: I0905 00:06:07.961788 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-policysync\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962307 kubelet[2553]: I0905 00:06:07.961810 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxghq\" (UniqueName: \"kubernetes.io/projected/b8066641-3407-4c71-a83a-88b92dfc96c3-kube-api-access-xxghq\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962307 kubelet[2553]: I0905 00:06:07.961829 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8066641-3407-4c71-a83a-88b92dfc96c3-lib-modules\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.962307 kubelet[2553]: I0905 00:06:07.961848 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b8066641-3407-4c71-a83a-88b92dfc96c3-node-certs\") pod \"calico-node-fclvv\" (UID: \"b8066641-3407-4c71-a83a-88b92dfc96c3\") " pod="calico-system/calico-node-fclvv" Sep 5 00:06:07.973514 kubelet[2553]: E0905 00:06:07.973432 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:07.978733 kubelet[2553]: E0905 00:06:07.978278 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:07.983285 containerd[1469]: time="2025-09-05T00:06:07.983241801Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-786cf5595b-z67t5,Uid:8a745ff5-b862-4017-9646-62fe78e23a61,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:08.163463 kubelet[2553]: I0905 00:06:08.163329 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/977e9373-23c2-4b46-9d36-9bf58abbfad5-kubelet-dir\") pod \"csi-node-driver-s9hbl\" (UID: \"977e9373-23c2-4b46-9d36-9bf58abbfad5\") " pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:08.163463 kubelet[2553]: I0905 00:06:08.163382 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/977e9373-23c2-4b46-9d36-9bf58abbfad5-socket-dir\") pod \"csi-node-driver-s9hbl\" (UID: \"977e9373-23c2-4b46-9d36-9bf58abbfad5\") " pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:08.163463 kubelet[2553]: I0905 00:06:08.163407 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/977e9373-23c2-4b46-9d36-9bf58abbfad5-varrun\") pod \"csi-node-driver-s9hbl\" (UID: \"977e9373-23c2-4b46-9d36-9bf58abbfad5\") " pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:08.163463 kubelet[2553]: I0905 00:06:08.163424 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/977e9373-23c2-4b46-9d36-9bf58abbfad5-registration-dir\") pod \"csi-node-driver-s9hbl\" (UID: \"977e9373-23c2-4b46-9d36-9bf58abbfad5\") " pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:08.163463 kubelet[2553]: I0905 00:06:08.163439 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwdgz\" (UniqueName: \"kubernetes.io/projected/977e9373-23c2-4b46-9d36-9bf58abbfad5-kube-api-access-kwdgz\") pod \"csi-node-driver-s9hbl\" (UID: \"977e9373-23c2-4b46-9d36-9bf58abbfad5\") " pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:08.182607 kubelet[2553]: E0905 00:06:08.182551 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.183085 kubelet[2553]: W0905 00:06:08.182682 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.183085 kubelet[2553]: E0905 00:06:08.182707 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.220899 containerd[1469]: time="2025-09-05T00:06:08.220488673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:08.221448 containerd[1469]: time="2025-09-05T00:06:08.221350448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:08.221448 containerd[1469]: time="2025-09-05T00:06:08.221373696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:08.221631 containerd[1469]: time="2025-09-05T00:06:08.221553997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:08.245348 systemd[1]: Started cri-containerd-2181dc1e797e9b1cf79fc8bfd4f9130fb7f6a7d7e7b6819ae2615bb6d922f31f.scope - libcontainer container 2181dc1e797e9b1cf79fc8bfd4f9130fb7f6a7d7e7b6819ae2615bb6d922f31f. Sep 5 00:06:08.264173 kubelet[2553]: E0905 00:06:08.263972 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.264173 kubelet[2553]: W0905 00:06:08.264023 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.264173 kubelet[2553]: E0905 00:06:08.264047 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.264624 kubelet[2553]: E0905 00:06:08.264467 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.264624 kubelet[2553]: W0905 00:06:08.264494 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.264624 kubelet[2553]: E0905 00:06:08.264506 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.264851 kubelet[2553]: E0905 00:06:08.264796 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.264851 kubelet[2553]: W0905 00:06:08.264839 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.264942 kubelet[2553]: E0905 00:06:08.264855 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.265254 kubelet[2553]: E0905 00:06:08.265236 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.265254 kubelet[2553]: W0905 00:06:08.265250 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.265356 kubelet[2553]: E0905 00:06:08.265271 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:08.265628 kubelet[2553]: E0905 00:06:08.265610 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.265628 kubelet[2553]: W0905 00:06:08.265625 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.265714 kubelet[2553]: E0905 00:06:08.265646 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.265944 kubelet[2553]: E0905 00:06:08.265926 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.265944 kubelet[2553]: W0905 00:06:08.265940 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.266124 kubelet[2553]: E0905 00:06:08.266101 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.266259 kubelet[2553]: E0905 00:06:08.266242 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.266259 kubelet[2553]: W0905 00:06:08.266256 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.266486 kubelet[2553]: E0905 00:06:08.266390 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.266550 kubelet[2553]: E0905 00:06:08.266517 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.266550 kubelet[2553]: W0905 00:06:08.266542 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.266691 kubelet[2553]: E0905 00:06:08.266669 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.266853 kubelet[2553]: E0905 00:06:08.266819 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.266853 kubelet[2553]: W0905 00:06:08.266837 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.267072 kubelet[2553]: E0905 00:06:08.266961 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:08.267122 kubelet[2553]: E0905 00:06:08.267106 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.267122 kubelet[2553]: W0905 00:06:08.267118 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.267341 kubelet[2553]: E0905 00:06:08.267258 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.267391 kubelet[2553]: E0905 00:06:08.267376 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.267391 kubelet[2553]: W0905 00:06:08.267389 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.267625 kubelet[2553]: E0905 00:06:08.267563 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.267711 kubelet[2553]: E0905 00:06:08.267691 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.267711 kubelet[2553]: W0905 00:06:08.267706 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.267798 kubelet[2553]: E0905 00:06:08.267739 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.268173 kubelet[2553]: E0905 00:06:08.268151 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.268173 kubelet[2553]: W0905 00:06:08.268165 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.268280 kubelet[2553]: E0905 00:06:08.268255 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.268589 kubelet[2553]: E0905 00:06:08.268568 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.268661 kubelet[2553]: W0905 00:06:08.268583 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.268829 kubelet[2553]: E0905 00:06:08.268756 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:08.269046 kubelet[2553]: E0905 00:06:08.269018 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.269124 kubelet[2553]: W0905 00:06:08.269047 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.269169 kubelet[2553]: E0905 00:06:08.269130 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.269475 kubelet[2553]: E0905 00:06:08.269451 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.269475 kubelet[2553]: W0905 00:06:08.269468 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.269582 kubelet[2553]: E0905 00:06:08.269556 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.269907 kubelet[2553]: E0905 00:06:08.269879 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.269907 kubelet[2553]: W0905 00:06:08.269900 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.270114 kubelet[2553]: E0905 00:06:08.270088 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.270230 kubelet[2553]: E0905 00:06:08.270208 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.270230 kubelet[2553]: W0905 00:06:08.270224 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.270397 kubelet[2553]: E0905 00:06:08.270372 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.270601 kubelet[2553]: E0905 00:06:08.270576 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.270601 kubelet[2553]: W0905 00:06:08.270594 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.270793 kubelet[2553]: E0905 00:06:08.270767 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:08.270925 kubelet[2553]: E0905 00:06:08.270902 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.270925 kubelet[2553]: W0905 00:06:08.270918 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.271233 kubelet[2553]: E0905 00:06:08.271206 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.271493 kubelet[2553]: E0905 00:06:08.271468 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.271493 kubelet[2553]: W0905 00:06:08.271486 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.271687 kubelet[2553]: E0905 00:06:08.271662 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.271830 kubelet[2553]: E0905 00:06:08.271805 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.271830 kubelet[2553]: W0905 00:06:08.271822 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.272061 kubelet[2553]: E0905 00:06:08.271967 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.272191 kubelet[2553]: E0905 00:06:08.272167 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.272191 kubelet[2553]: W0905 00:06:08.272184 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.272299 kubelet[2553]: E0905 00:06:08.272276 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.272571 kubelet[2553]: E0905 00:06:08.272547 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.272571 kubelet[2553]: W0905 00:06:08.272564 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.272666 kubelet[2553]: E0905 00:06:08.272594 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:08.272928 kubelet[2553]: E0905 00:06:08.272901 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.272928 kubelet[2553]: W0905 00:06:08.272919 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.273092 kubelet[2553]: E0905 00:06:08.272933 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.282860 kubelet[2553]: E0905 00:06:08.282827 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:08.282860 kubelet[2553]: W0905 00:06:08.282845 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:08.282860 kubelet[2553]: E0905 00:06:08.282860 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:08.295032 containerd[1469]: time="2025-09-05T00:06:08.294905838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-786cf5595b-z67t5,Uid:8a745ff5-b862-4017-9646-62fe78e23a61,Namespace:calico-system,Attempt:0,} returns sandbox id \"2181dc1e797e9b1cf79fc8bfd4f9130fb7f6a7d7e7b6819ae2615bb6d922f31f\"" Sep 5 00:06:08.296345 kubelet[2553]: E0905 00:06:08.296316 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:08.297924 containerd[1469]: time="2025-09-05T00:06:08.297703669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 00:06:08.430157 containerd[1469]: time="2025-09-05T00:06:08.430021236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fclvv,Uid:b8066641-3407-4c71-a83a-88b92dfc96c3,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:08.543417 containerd[1469]: time="2025-09-05T00:06:08.542645694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:08.543417 containerd[1469]: time="2025-09-05T00:06:08.542800854Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:08.543417 containerd[1469]: time="2025-09-05T00:06:08.542842018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:08.543823 containerd[1469]: time="2025-09-05T00:06:08.543765329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:08.560330 systemd[1]: run-containerd-runc-k8s.io-83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1-runc.3xsCsP.mount: Deactivated successfully. Sep 5 00:06:08.571179 systemd[1]: Started cri-containerd-83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1.scope - libcontainer container 83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1. 
Sep 5 00:06:08.595321 containerd[1469]: time="2025-09-05T00:06:08.595261797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fclvv,Uid:b8066641-3407-4c71-a83a-88b92dfc96c3,Namespace:calico-system,Attempt:0,} returns sandbox id \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\"" Sep 5 00:06:10.058363 kubelet[2553]: E0905 00:06:10.058219 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:10.317411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1569812581.mount: Deactivated successfully. Sep 5 00:06:11.253020 containerd[1469]: time="2025-09-05T00:06:11.252907081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:11.254565 containerd[1469]: time="2025-09-05T00:06:11.254459714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 5 00:06:11.256234 containerd[1469]: time="2025-09-05T00:06:11.256155250Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:11.258849 containerd[1469]: time="2025-09-05T00:06:11.258774614Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:11.259578 containerd[1469]: time="2025-09-05T00:06:11.259531250Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.961780956s" Sep 5 00:06:11.259666 containerd[1469]: time="2025-09-05T00:06:11.259584890Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 5 00:06:11.261807 containerd[1469]: time="2025-09-05T00:06:11.261664239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 00:06:11.272331 containerd[1469]: time="2025-09-05T00:06:11.272284146Z" level=info msg="CreateContainer within sandbox \"2181dc1e797e9b1cf79fc8bfd4f9130fb7f6a7d7e7b6819ae2615bb6d922f31f\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 00:06:11.291297 containerd[1469]: time="2025-09-05T00:06:11.291230401Z" level=info msg="CreateContainer within sandbox \"2181dc1e797e9b1cf79fc8bfd4f9130fb7f6a7d7e7b6819ae2615bb6d922f31f\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"147ae1b5e7b56ae19ea1dead01994a5d5c5ae96d2f079f6724e120045a2a32cc\"" Sep 5 00:06:11.292848 containerd[1469]: time="2025-09-05T00:06:11.291955012Z" level=info msg="StartContainer for \"147ae1b5e7b56ae19ea1dead01994a5d5c5ae96d2f079f6724e120045a2a32cc\"" Sep 5 00:06:11.333289 systemd[1]: Started cri-containerd-147ae1b5e7b56ae19ea1dead01994a5d5c5ae96d2f079f6724e120045a2a32cc.scope - libcontainer container 
147ae1b5e7b56ae19ea1dead01994a5d5c5ae96d2f079f6724e120045a2a32cc. Sep 5 00:06:11.385691 containerd[1469]: time="2025-09-05T00:06:11.385609031Z" level=info msg="StartContainer for \"147ae1b5e7b56ae19ea1dead01994a5d5c5ae96d2f079f6724e120045a2a32cc\" returns successfully" Sep 5 00:06:12.057266 kubelet[2553]: E0905 00:06:12.057202 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:12.142006 kubelet[2553]: E0905 00:06:12.141932 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:12.191930 kubelet[2553]: E0905 00:06:12.191873 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.191930 kubelet[2553]: W0905 00:06:12.191914 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.192115 kubelet[2553]: E0905 00:06:12.191942 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.192310 kubelet[2553]: E0905 00:06:12.192287 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.192310 kubelet[2553]: W0905 00:06:12.192301 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.192310 kubelet[2553]: E0905 00:06:12.192310 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.192555 kubelet[2553]: E0905 00:06:12.192539 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.192586 kubelet[2553]: W0905 00:06:12.192550 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.192586 kubelet[2553]: E0905 00:06:12.192574 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.192877 kubelet[2553]: E0905 00:06:12.192860 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.192877 kubelet[2553]: W0905 00:06:12.192873 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.192949 kubelet[2553]: E0905 00:06:12.192902 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:12.193200 kubelet[2553]: E0905 00:06:12.193184 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.193200 kubelet[2553]: W0905 00:06:12.193196 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.193275 kubelet[2553]: E0905 00:06:12.193205 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.193473 kubelet[2553]: E0905 00:06:12.193449 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.193473 kubelet[2553]: W0905 00:06:12.193463 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.193473 kubelet[2553]: E0905 00:06:12.193472 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.193694 kubelet[2553]: E0905 00:06:12.193678 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.193694 kubelet[2553]: W0905 00:06:12.193689 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.193796 kubelet[2553]: E0905 00:06:12.193697 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.193934 kubelet[2553]: E0905 00:06:12.193918 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.193975 kubelet[2553]: W0905 00:06:12.193935 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.193975 kubelet[2553]: E0905 00:06:12.193946 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.194228 kubelet[2553]: E0905 00:06:12.194213 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.194228 kubelet[2553]: W0905 00:06:12.194224 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.194295 kubelet[2553]: E0905 00:06:12.194233 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:12.194464 kubelet[2553]: E0905 00:06:12.194440 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.194464 kubelet[2553]: W0905 00:06:12.194461 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.194512 kubelet[2553]: E0905 00:06:12.194470 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.194680 kubelet[2553]: E0905 00:06:12.194663 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.194680 kubelet[2553]: W0905 00:06:12.194676 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.194742 kubelet[2553]: E0905 00:06:12.194686 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.194897 kubelet[2553]: E0905 00:06:12.194884 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.194897 kubelet[2553]: W0905 00:06:12.194894 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.194941 kubelet[2553]: E0905 00:06:12.194902 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.195148 kubelet[2553]: E0905 00:06:12.195132 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.195148 kubelet[2553]: W0905 00:06:12.195144 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.195214 kubelet[2553]: E0905 00:06:12.195154 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.195372 kubelet[2553]: E0905 00:06:12.195358 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.195372 kubelet[2553]: W0905 00:06:12.195368 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.195417 kubelet[2553]: E0905 00:06:12.195376 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:12.195581 kubelet[2553]: E0905 00:06:12.195567 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.195581 kubelet[2553]: W0905 00:06:12.195578 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.195634 kubelet[2553]: E0905 00:06:12.195586 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.247871 kubelet[2553]: I0905 00:06:12.247535 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-786cf5595b-z67t5" podStartSLOduration=2.283746577 podStartE2EDuration="5.247511453s" podCreationTimestamp="2025-09-05 00:06:07 +0000 UTC" firstStartedPulling="2025-09-05 00:06:08.297438793 +0000 UTC m=+20.374400575" lastFinishedPulling="2025-09-05 00:06:11.261203668 +0000 UTC m=+23.338165451" observedRunningTime="2025-09-05 00:06:12.246795995 +0000 UTC m=+24.323757777" watchObservedRunningTime="2025-09-05 00:06:12.247511453 +0000 UTC m=+24.324473245" Sep 5 00:06:12.292736 kubelet[2553]: E0905 00:06:12.292670 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.292736 kubelet[2553]: W0905 00:06:12.292705 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.292736 kubelet[2553]: E0905 00:06:12.292733 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.293273 kubelet[2553]: E0905 00:06:12.293229 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.293273 kubelet[2553]: W0905 00:06:12.293259 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.293466 kubelet[2553]: E0905 00:06:12.293315 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.293785 kubelet[2553]: E0905 00:06:12.293760 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.293785 kubelet[2553]: W0905 00:06:12.293781 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.293875 kubelet[2553]: E0905 00:06:12.293803 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:12.294267 kubelet[2553]: E0905 00:06:12.294224 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.294357 kubelet[2553]: W0905 00:06:12.294262 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.294357 kubelet[2553]: E0905 00:06:12.294304 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.294724 kubelet[2553]: E0905 00:06:12.294702 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.294724 kubelet[2553]: W0905 00:06:12.294719 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.294838 kubelet[2553]: E0905 00:06:12.294739 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.295243 kubelet[2553]: E0905 00:06:12.295217 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.295243 kubelet[2553]: W0905 00:06:12.295234 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.295336 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.296237 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.296966 kubelet[2553]: W0905 00:06:12.296249 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.296291 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.296504 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.296966 kubelet[2553]: W0905 00:06:12.296513 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.296553 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.296709 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.296966 kubelet[2553]: W0905 00:06:12.296717 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.296966 kubelet[2553]: E0905 00:06:12.296730 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.297429 kubelet[2553]: E0905 00:06:12.297018 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.297429 kubelet[2553]: W0905 00:06:12.297034 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.297429 kubelet[2553]: E0905 00:06:12.297058 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.297429 kubelet[2553]: E0905 00:06:12.297316 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.297429 kubelet[2553]: W0905 00:06:12.297327 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.297429 kubelet[2553]: E0905 00:06:12.297341 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.297925 kubelet[2553]: E0905 00:06:12.297892 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.297925 kubelet[2553]: W0905 00:06:12.297910 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.297925 kubelet[2553]: E0905 00:06:12.297930 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.298434 kubelet[2553]: E0905 00:06:12.298409 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.298434 kubelet[2553]: W0905 00:06:12.298425 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.298568 kubelet[2553]: E0905 00:06:12.298537 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:12.298798 kubelet[2553]: E0905 00:06:12.298764 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.298886 kubelet[2553]: W0905 00:06:12.298798 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.298886 kubelet[2553]: E0905 00:06:12.298835 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.299218 kubelet[2553]: E0905 00:06:12.299194 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.299218 kubelet[2553]: W0905 00:06:12.299212 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.299335 kubelet[2553]: E0905 00:06:12.299234 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.299564 kubelet[2553]: E0905 00:06:12.299539 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.299564 kubelet[2553]: W0905 00:06:12.299557 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.299679 kubelet[2553]: E0905 00:06:12.299579 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.300054 kubelet[2553]: E0905 00:06:12.299999 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.300054 kubelet[2553]: W0905 00:06:12.300028 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.300054 kubelet[2553]: E0905 00:06:12.300053 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:12.300385 kubelet[2553]: E0905 00:06:12.300353 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:12.300385 kubelet[2553]: W0905 00:06:12.300366 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:12.300385 kubelet[2553]: E0905 00:06:12.300377 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.144029 kubelet[2553]: I0905 00:06:13.143955 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:06:13.144618 kubelet[2553]: E0905 00:06:13.144388 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:13.203227 kubelet[2553]: E0905 00:06:13.203175 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.203227 kubelet[2553]: W0905 00:06:13.203214 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.203494 kubelet[2553]: E0905 00:06:13.203247 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.203536 kubelet[2553]: E0905 00:06:13.203508 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.203536 kubelet[2553]: W0905 00:06:13.203520 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.203536 kubelet[2553]: E0905 00:06:13.203533 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.203833 kubelet[2553]: E0905 00:06:13.203798 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.203833 kubelet[2553]: W0905 00:06:13.203813 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.203833 kubelet[2553]: E0905 00:06:13.203825 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.204099 kubelet[2553]: E0905 00:06:13.204078 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.204099 kubelet[2553]: W0905 00:06:13.204091 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.204194 kubelet[2553]: E0905 00:06:13.204104 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.204408 kubelet[2553]: E0905 00:06:13.204375 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.204408 kubelet[2553]: W0905 00:06:13.204389 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.204408 kubelet[2553]: E0905 00:06:13.204402 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.204652 kubelet[2553]: E0905 00:06:13.204629 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.204652 kubelet[2553]: W0905 00:06:13.204642 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.204731 kubelet[2553]: E0905 00:06:13.204654 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.204906 kubelet[2553]: E0905 00:06:13.204881 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.204906 kubelet[2553]: W0905 00:06:13.204894 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.204906 kubelet[2553]: E0905 00:06:13.204905 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.205182 kubelet[2553]: E0905 00:06:13.205156 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.205182 kubelet[2553]: W0905 00:06:13.205169 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.205182 kubelet[2553]: E0905 00:06:13.205180 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.205571 kubelet[2553]: E0905 00:06:13.205554 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.205571 kubelet[2553]: W0905 00:06:13.205568 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.205682 kubelet[2553]: E0905 00:06:13.205579 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.205874 kubelet[2553]: E0905 00:06:13.205839 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.205874 kubelet[2553]: W0905 00:06:13.205853 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.205874 kubelet[2553]: E0905 00:06:13.205865 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.206159 kubelet[2553]: E0905 00:06:13.206124 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.206159 kubelet[2553]: W0905 00:06:13.206148 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.206244 kubelet[2553]: E0905 00:06:13.206159 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.206411 kubelet[2553]: E0905 00:06:13.206388 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.206411 kubelet[2553]: W0905 00:06:13.206399 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.206411 kubelet[2553]: E0905 00:06:13.206409 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.206653 kubelet[2553]: E0905 00:06:13.206630 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.206653 kubelet[2553]: W0905 00:06:13.206641 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.206653 kubelet[2553]: E0905 00:06:13.206650 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.206873 kubelet[2553]: E0905 00:06:13.206853 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.206873 kubelet[2553]: W0905 00:06:13.206864 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.206873 kubelet[2553]: E0905 00:06:13.206873 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.207121 kubelet[2553]: E0905 00:06:13.207106 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.207121 kubelet[2553]: W0905 00:06:13.207116 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.207215 kubelet[2553]: E0905 00:06:13.207135 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.304277 kubelet[2553]: E0905 00:06:13.304099 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.304277 kubelet[2553]: W0905 00:06:13.304124 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.304277 kubelet[2553]: E0905 00:06:13.304156 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.304746 kubelet[2553]: E0905 00:06:13.304609 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.304746 kubelet[2553]: W0905 00:06:13.304622 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.304746 kubelet[2553]: E0905 00:06:13.304635 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.305173 kubelet[2553]: E0905 00:06:13.305106 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.305173 kubelet[2553]: W0905 00:06:13.305150 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.305384 kubelet[2553]: E0905 00:06:13.305195 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.305621 kubelet[2553]: E0905 00:06:13.305601 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.305621 kubelet[2553]: W0905 00:06:13.305617 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.305712 kubelet[2553]: E0905 00:06:13.305636 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.305910 kubelet[2553]: E0905 00:06:13.305885 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.305910 kubelet[2553]: W0905 00:06:13.305900 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.305955 kubelet[2553]: E0905 00:06:13.305944 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.306221 kubelet[2553]: E0905 00:06:13.306197 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.306221 kubelet[2553]: W0905 00:06:13.306211 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.306298 kubelet[2553]: E0905 00:06:13.306247 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.306474 kubelet[2553]: E0905 00:06:13.306458 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.306474 kubelet[2553]: W0905 00:06:13.306471 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.306520 kubelet[2553]: E0905 00:06:13.306503 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.306708 kubelet[2553]: E0905 00:06:13.306691 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.306708 kubelet[2553]: W0905 00:06:13.306701 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.306789 kubelet[2553]: E0905 00:06:13.306717 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.307011 kubelet[2553]: E0905 00:06:13.306966 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.307011 kubelet[2553]: W0905 00:06:13.307007 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.307081 kubelet[2553]: E0905 00:06:13.307028 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.307377 kubelet[2553]: E0905 00:06:13.307355 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.307377 kubelet[2553]: W0905 00:06:13.307372 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.307456 kubelet[2553]: E0905 00:06:13.307392 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.307768 kubelet[2553]: E0905 00:06:13.307749 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.307768 kubelet[2553]: W0905 00:06:13.307764 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.307833 kubelet[2553]: E0905 00:06:13.307809 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.308114 kubelet[2553]: E0905 00:06:13.308077 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.308114 kubelet[2553]: W0905 00:06:13.308105 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.308269 kubelet[2553]: E0905 00:06:13.308236 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.308438 kubelet[2553]: E0905 00:06:13.308419 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.308438 kubelet[2553]: W0905 00:06:13.308431 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.308495 kubelet[2553]: E0905 00:06:13.308448 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.308895 kubelet[2553]: E0905 00:06:13.308877 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.308895 kubelet[2553]: W0905 00:06:13.308891 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.308961 kubelet[2553]: E0905 00:06:13.308907 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.309270 kubelet[2553]: E0905 00:06:13.309253 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.309270 kubelet[2553]: W0905 00:06:13.309268 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.309326 kubelet[2553]: E0905 00:06:13.309297 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.309582 kubelet[2553]: E0905 00:06:13.309563 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.309582 kubelet[2553]: W0905 00:06:13.309576 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.309582 kubelet[2553]: E0905 00:06:13.309586 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.309857 kubelet[2553]: E0905 00:06:13.309823 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.309857 kubelet[2553]: W0905 00:06:13.309841 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.309857 kubelet[2553]: E0905 00:06:13.309854 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 00:06:13.310400 kubelet[2553]: E0905 00:06:13.310380 2553 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 00:06:13.310400 kubelet[2553]: W0905 00:06:13.310397 2553 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 00:06:13.310488 kubelet[2553]: E0905 00:06:13.310410 2553 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 00:06:13.616910 containerd[1469]: time="2025-09-05T00:06:13.616845863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:13.619950 containerd[1469]: time="2025-09-05T00:06:13.619905560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 5 00:06:13.623774 containerd[1469]: time="2025-09-05T00:06:13.623691936Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:13.629021 containerd[1469]: time="2025-09-05T00:06:13.628970279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:13.629950 containerd[1469]: time="2025-09-05T00:06:13.629877395Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.368094183s" Sep 5 00:06:13.629950 containerd[1469]: time="2025-09-05T00:06:13.629942669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 5 00:06:13.632458 containerd[1469]: time="2025-09-05T00:06:13.632417896Z" level=info msg="CreateContainer within sandbox \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 00:06:13.657887 containerd[1469]: time="2025-09-05T00:06:13.657830177Z" level=info msg="CreateContainer within sandbox \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70\"" Sep 5 00:06:13.658497 containerd[1469]: time="2025-09-05T00:06:13.658467114Z" level=info msg="StartContainer for \"1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70\"" Sep 5 00:06:13.696182 systemd[1]: Started cri-containerd-1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70.scope - libcontainer container 1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70. Sep 5 00:06:13.734472 containerd[1469]: time="2025-09-05T00:06:13.734417063Z" level=info msg="StartContainer for \"1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70\" returns successfully" Sep 5 00:06:13.747170 systemd[1]: cri-containerd-1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70.scope: Deactivated successfully. Sep 5 00:06:13.776687 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70-rootfs.mount: Deactivated successfully. 
Sep 5 00:06:14.058076 kubelet[2553]: E0905 00:06:14.058009 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:14.108716 containerd[1469]: time="2025-09-05T00:06:14.108602776Z" level=info msg="shim disconnected" id=1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70 namespace=k8s.io Sep 5 00:06:14.108716 containerd[1469]: time="2025-09-05T00:06:14.108707359Z" level=warning msg="cleaning up after shim disconnected" id=1e3642e757662fc3a9aba272498e50029be9214026406eb183a585ecab1fab70 namespace=k8s.io Sep 5 00:06:14.108716 containerd[1469]: time="2025-09-05T00:06:14.108719804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:06:14.152067 containerd[1469]: time="2025-09-05T00:06:14.152011216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 00:06:14.518531 kubelet[2553]: I0905 00:06:14.518483 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:06:14.519092 kubelet[2553]: E0905 00:06:14.518860 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:15.150705 kubelet[2553]: E0905 00:06:15.150646 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:16.057963 kubelet[2553]: E0905 00:06:16.057899 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:17.883279 containerd[1469]: time="2025-09-05T00:06:17.883207947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:17.884125 containerd[1469]: time="2025-09-05T00:06:17.884026121Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 5 00:06:17.885094 containerd[1469]: time="2025-09-05T00:06:17.885063858Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:17.887641 containerd[1469]: time="2025-09-05T00:06:17.887592608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:17.888448 containerd[1469]: time="2025-09-05T00:06:17.888410001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.736355207s" Sep 5 00:06:17.888448 containerd[1469]: time="2025-09-05T00:06:17.888439280Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 5 00:06:17.891089 containerd[1469]: time="2025-09-05T00:06:17.891045727Z" level=info msg="CreateContainer within sandbox \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 00:06:17.906918 containerd[1469]: time="2025-09-05T00:06:17.906828645Z" level=info msg="CreateContainer within sandbox \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109\"" Sep 5 00:06:17.907245 containerd[1469]: time="2025-09-05T00:06:17.907216710Z" level=info msg="StartContainer for \"821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109\"" Sep 5 00:06:17.939784 systemd[1]: Started cri-containerd-821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109.scope - libcontainer container 821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109. Sep 5 00:06:18.034623 containerd[1469]: time="2025-09-05T00:06:18.034562249Z" level=info msg="StartContainer for \"821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109\" returns successfully" Sep 5 00:06:18.057685 kubelet[2553]: E0905 00:06:18.057562 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:19.587959 systemd[1]: cri-containerd-821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109.scope: Deactivated successfully. Sep 5 00:06:19.609322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109-rootfs.mount: Deactivated successfully. Sep 5 00:06:19.661466 kubelet[2553]: I0905 00:06:19.661423 2553 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 5 00:06:20.063175 systemd[1]: Created slice kubepods-besteffort-pod977e9373_23c2_4b46_9d36_9bf58abbfad5.slice - libcontainer container kubepods-besteffort-pod977e9373_23c2_4b46_9d36_9bf58abbfad5.slice. Sep 5 00:06:20.067495 containerd[1469]: time="2025-09-05T00:06:20.067437434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9hbl,Uid:977e9373-23c2-4b46-9d36-9bf58abbfad5,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:20.184806 systemd[1]: Created slice kubepods-besteffort-pod28bf50aa_c0b3_4b34_a1a5_10a9dcc0d148.slice - libcontainer container kubepods-besteffort-pod28bf50aa_c0b3_4b34_a1a5_10a9dcc0d148.slice. Sep 5 00:06:20.288901 systemd[1]: Created slice kubepods-besteffort-podff309b76_ee57_400d_897b_26dbc2ef6eeb.slice - libcontainer container kubepods-besteffort-podff309b76_ee57_400d_897b_26dbc2ef6eeb.slice. 
Sep 5 00:06:20.290727 containerd[1469]: time="2025-09-05T00:06:20.290670377Z" level=info msg="shim disconnected" id=821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109 namespace=k8s.io Sep 5 00:06:20.290727 containerd[1469]: time="2025-09-05T00:06:20.290726711Z" level=warning msg="cleaning up after shim disconnected" id=821effba3c35fe198d373e66193fb2ce0b55852715b615e69436183c2d8e3109 namespace=k8s.io Sep 5 00:06:20.290833 containerd[1469]: time="2025-09-05T00:06:20.290735168Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:06:20.298520 systemd[1]: Created slice kubepods-burstable-pod70048a66_95e5_4bf3_806b_b05cba03386e.slice - libcontainer container kubepods-burstable-pod70048a66_95e5_4bf3_806b_b05cba03386e.slice. Sep 5 00:06:20.309731 systemd[1]: Created slice kubepods-besteffort-pod5af96d62_36e9_4622_9b9f_c02e6f235a53.slice - libcontainer container kubepods-besteffort-pod5af96d62_36e9_4622_9b9f_c02e6f235a53.slice. Sep 5 00:06:20.317970 systemd[1]: Created slice kubepods-burstable-podfb5a9823_833b_45bb_9546_46d6c264eca2.slice - libcontainer container kubepods-burstable-podfb5a9823_833b_45bb_9546_46d6c264eca2.slice. Sep 5 00:06:20.326145 systemd[1]: Created slice kubepods-besteffort-pod43623a80_72d4_46e0_adcc_392202d1d1f2.slice - libcontainer container kubepods-besteffort-pod43623a80_72d4_46e0_adcc_392202d1d1f2.slice. Sep 5 00:06:20.334403 systemd[1]: Created slice kubepods-besteffort-podeea7809f_54a5_40ed_94cd_7efa0cc56047.slice - libcontainer container kubepods-besteffort-podeea7809f_54a5_40ed_94cd_7efa0cc56047.slice. Sep 5 00:06:20.343319 systemd[1]: Created slice kubepods-besteffort-podb658ca0f_5303_457c_9cf1_6dddd6c1387f.slice - libcontainer container kubepods-besteffort-podb658ca0f_5303_457c_9cf1_6dddd6c1387f.slice. 
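The "Created slice" lines show the systemd cgroup driver's naming scheme: QoS class plus the pod UID with dashes escaped to underscores, since "-" acts as a hierarchy separator in systemd slice names. A small sketch of that mapping (hypothetical helper name):

    # Sketch: derive the systemd slice name for a pod the way the
    # Created slice lines above show it (UID dashes become underscores).
    def pod_slice(qos: str, pod_uid: str) -> str:
        return "kubepods-%s-pod%s.slice" % (qos, pod_uid.replace("-", "_"))

    print(pod_slice("besteffort", "977e9373-23c2-4b46-9d36-9bf58abbfad5"))
    # -> kubepods-besteffort-pod977e9373_23c2_4b46_9d36_9bf58abbfad5.slice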
Sep 5 00:06:20.355950 kubelet[2553]: I0905 00:06:20.355894 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-calico-apiserver-certs\") pod \"calico-apiserver-579d867b4c-ljpgm\" (UID: \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\") " pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" Sep 5 00:06:20.355950 kubelet[2553]: I0905 00:06:20.355938 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrxqv\" (UniqueName: \"kubernetes.io/projected/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-kube-api-access-wrxqv\") pod \"calico-apiserver-579d867b4c-ljpgm\" (UID: \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\") " pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" Sep 5 00:06:20.457310 kubelet[2553]: I0905 00:06:20.457253 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb5a9823-833b-45bb-9546-46d6c264eca2-config-volume\") pod \"coredns-7c65d6cfc9-6gz8l\" (UID: \"fb5a9823-833b-45bb-9546-46d6c264eca2\") " pod="kube-system/coredns-7c65d6cfc9-6gz8l" Sep 5 00:06:20.457310 kubelet[2553]: I0905 00:06:20.457316 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnzbv\" (UniqueName: \"kubernetes.io/projected/fb5a9823-833b-45bb-9546-46d6c264eca2-kube-api-access-fnzbv\") pod \"coredns-7c65d6cfc9-6gz8l\" (UID: \"fb5a9823-833b-45bb-9546-46d6c264eca2\") " pod="kube-system/coredns-7c65d6cfc9-6gz8l" Sep 5 00:06:20.457558 kubelet[2553]: I0905 00:06:20.457361 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70048a66-95e5-4bf3-806b-b05cba03386e-config-volume\") pod \"coredns-7c65d6cfc9-8skvt\" (UID: \"70048a66-95e5-4bf3-806b-b05cba03386e\") " pod="kube-system/coredns-7c65d6cfc9-8skvt" Sep 5 00:06:20.457558 kubelet[2553]: I0905 00:06:20.457383 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43623a80-72d4-46e0-adcc-392202d1d1f2-calico-apiserver-certs\") pod \"calico-apiserver-56d8f46b5d-7zqzw\" (UID: \"43623a80-72d4-46e0-adcc-392202d1d1f2\") " pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" Sep 5 00:06:20.457558 kubelet[2553]: I0905 00:06:20.457407 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm65h\" (UniqueName: \"kubernetes.io/projected/eea7809f-54a5-40ed-94cd-7efa0cc56047-kube-api-access-bm65h\") pod \"calico-apiserver-579d867b4c-lvxcz\" (UID: \"eea7809f-54a5-40ed-94cd-7efa0cc56047\") " pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" Sep 5 00:06:20.457558 kubelet[2553]: I0905 00:06:20.457430 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggjsf\" (UniqueName: \"kubernetes.io/projected/43623a80-72d4-46e0-adcc-392202d1d1f2-kube-api-access-ggjsf\") pod \"calico-apiserver-56d8f46b5d-7zqzw\" (UID: \"43623a80-72d4-46e0-adcc-392202d1d1f2\") " pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" Sep 5 00:06:20.457558 kubelet[2553]: I0905 00:06:20.457453 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-backend-key-pair\") pod \"whisker-66fd4cc6cf-ntv2j\" (UID: \"5af96d62-36e9-4622-9b9f-c02e6f235a53\") " pod="calico-system/whisker-66fd4cc6cf-ntv2j" Sep 5 00:06:20.457760 kubelet[2553]: I0905 00:06:20.457479 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eea7809f-54a5-40ed-94cd-7efa0cc56047-calico-apiserver-certs\") pod \"calico-apiserver-579d867b4c-lvxcz\" (UID: \"eea7809f-54a5-40ed-94cd-7efa0cc56047\") " pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" Sep 5 00:06:20.457760 kubelet[2553]: I0905 00:06:20.457514 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ff309b76-ee57-400d-897b-26dbc2ef6eeb-tigera-ca-bundle\") pod \"calico-kube-controllers-59b7b569f5-wfvvx\" (UID: \"ff309b76-ee57-400d-897b-26dbc2ef6eeb\") " pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" Sep 5 00:06:20.457760 kubelet[2553]: I0905 00:06:20.457544 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kbsp\" (UniqueName: \"kubernetes.io/projected/ff309b76-ee57-400d-897b-26dbc2ef6eeb-kube-api-access-7kbsp\") pod \"calico-kube-controllers-59b7b569f5-wfvvx\" (UID: \"ff309b76-ee57-400d-897b-26dbc2ef6eeb\") " pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" Sep 5 00:06:20.457760 kubelet[2553]: I0905 00:06:20.457572 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b658ca0f-5303-457c-9cf1-6dddd6c1387f-goldmane-key-pair\") pod \"goldmane-7988f88666-jxkr8\" (UID: \"b658ca0f-5303-457c-9cf1-6dddd6c1387f\") " pod="calico-system/goldmane-7988f88666-jxkr8" Sep 5 00:06:20.457760 kubelet[2553]: I0905 00:06:20.457595 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k4cq\" (UniqueName: \"kubernetes.io/projected/b658ca0f-5303-457c-9cf1-6dddd6c1387f-kube-api-access-7k4cq\") pod \"goldmane-7988f88666-jxkr8\" (UID: \"b658ca0f-5303-457c-9cf1-6dddd6c1387f\") " pod="calico-system/goldmane-7988f88666-jxkr8" Sep 5 00:06:20.457958 kubelet[2553]: I0905 00:06:20.457621 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b658ca0f-5303-457c-9cf1-6dddd6c1387f-goldmane-ca-bundle\") pod \"goldmane-7988f88666-jxkr8\" (UID: \"b658ca0f-5303-457c-9cf1-6dddd6c1387f\") " pod="calico-system/goldmane-7988f88666-jxkr8" Sep 5 00:06:20.457958 kubelet[2553]: I0905 00:06:20.457660 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-ca-bundle\") pod \"whisker-66fd4cc6cf-ntv2j\" (UID: \"5af96d62-36e9-4622-9b9f-c02e6f235a53\") " pod="calico-system/whisker-66fd4cc6cf-ntv2j" Sep 5 00:06:20.457958 kubelet[2553]: I0905 00:06:20.457685 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rng79\" (UniqueName: \"kubernetes.io/projected/70048a66-95e5-4bf3-806b-b05cba03386e-kube-api-access-rng79\") pod \"coredns-7c65d6cfc9-8skvt\" (UID: 
\"70048a66-95e5-4bf3-806b-b05cba03386e\") " pod="kube-system/coredns-7c65d6cfc9-8skvt" Sep 5 00:06:20.457958 kubelet[2553]: I0905 00:06:20.457717 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z9sz\" (UniqueName: \"kubernetes.io/projected/5af96d62-36e9-4622-9b9f-c02e6f235a53-kube-api-access-5z9sz\") pod \"whisker-66fd4cc6cf-ntv2j\" (UID: \"5af96d62-36e9-4622-9b9f-c02e6f235a53\") " pod="calico-system/whisker-66fd4cc6cf-ntv2j" Sep 5 00:06:20.457958 kubelet[2553]: I0905 00:06:20.457761 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b658ca0f-5303-457c-9cf1-6dddd6c1387f-config\") pod \"goldmane-7988f88666-jxkr8\" (UID: \"b658ca0f-5303-457c-9cf1-6dddd6c1387f\") " pod="calico-system/goldmane-7988f88666-jxkr8" Sep 5 00:06:20.529410 containerd[1469]: time="2025-09-05T00:06:20.529343975Z" level=error msg="Failed to destroy network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.529872 containerd[1469]: time="2025-09-05T00:06:20.529822769Z" level=error msg="encountered an error cleaning up failed sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.529944 containerd[1469]: time="2025-09-05T00:06:20.529887670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9hbl,Uid:977e9373-23c2-4b46-9d36-9bf58abbfad5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.538939 kubelet[2553]: E0905 00:06:20.538891 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.539031 kubelet[2553]: E0905 00:06:20.538963 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:20.539031 kubelet[2553]: E0905 00:06:20.539005 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s9hbl" Sep 5 00:06:20.539266 kubelet[2553]: E0905 00:06:20.539071 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s9hbl_calico-system(977e9373-23c2-4b46-9d36-9bf58abbfad5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s9hbl_calico-system(977e9373-23c2-4b46-9d36-9bf58abbfad5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:20.593163 containerd[1469]: time="2025-09-05T00:06:20.593027541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7b569f5-wfvvx,Uid:ff309b76-ee57-400d-897b-26dbc2ef6eeb,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:20.604321 kubelet[2553]: E0905 00:06:20.604281 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:20.604789 containerd[1469]: time="2025-09-05T00:06:20.604734922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8skvt,Uid:70048a66-95e5-4bf3-806b-b05cba03386e,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:20.615154 containerd[1469]: time="2025-09-05T00:06:20.614809908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66fd4cc6cf-ntv2j,Uid:5af96d62-36e9-4622-9b9f-c02e6f235a53,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:20.622640 kubelet[2553]: E0905 00:06:20.622599 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:20.625336 containerd[1469]: time="2025-09-05T00:06:20.625264648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6gz8l,Uid:fb5a9823-833b-45bb-9546-46d6c264eca2,Namespace:kube-system,Attempt:0,}" Sep 5 00:06:20.629012 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901-shm.mount: Deactivated successfully. 
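Every RunPodSandbox failure above shares one root cause: the Calico CNI plugin stats /var/lib/calico/nodename before it will network a pod, and that file is written by the calico/node container only once it is running, so during this startup window every CNI add and delete on the node fails with the same error. Below is a minimal Go sketch of that readiness precondition; the file path and the error wording are taken straight from the log, while the function names and the program around them are ours for illustration only.

package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by calico/node after it starts; until it
// exists, every CNI ADD/DEL on this node fails as in the log above.
const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady mirrors the plugin's precondition: stat the nodename
// file and surface the same hint the log shows when it is missing.
func calicoNodeReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return nil
}

func main() {
	if err := calicoNodeReady(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("calico/node is up; sandbox networking should succeed")
}

Note that kubelet treats each failure as retryable ("Error syncing pod, skipping"), which is why the identical message recurs below once per pending pod (calico-kube-controllers, coredns, whisker, the apiservers, goldmane) rather than any pod being marked permanently failed.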
Sep 5 00:06:20.630520 containerd[1469]: time="2025-09-05T00:06:20.630480125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8f46b5d-7zqzw,Uid:43623a80-72d4-46e0-adcc-392202d1d1f2,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:06:20.638373 containerd[1469]: time="2025-09-05T00:06:20.638337428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-lvxcz,Uid:eea7809f-54a5-40ed-94cd-7efa0cc56047,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:06:20.647240 containerd[1469]: time="2025-09-05T00:06:20.647199614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jxkr8,Uid:b658ca0f-5303-457c-9cf1-6dddd6c1387f,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:20.671292 containerd[1469]: time="2025-09-05T00:06:20.671169845Z" level=error msg="Failed to destroy network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.671962 containerd[1469]: time="2025-09-05T00:06:20.671795915Z" level=error msg="encountered an error cleaning up failed sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.671962 containerd[1469]: time="2025-09-05T00:06:20.671860354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7b569f5-wfvvx,Uid:ff309b76-ee57-400d-897b-26dbc2ef6eeb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.672252 kubelet[2553]: E0905 00:06:20.672123 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.672252 kubelet[2553]: E0905 00:06:20.672206 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" Sep 5 00:06:20.672252 kubelet[2553]: E0905 00:06:20.672231 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" Sep 5 00:06:20.672724 kubelet[2553]: E0905 00:06:20.672367 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59b7b569f5-wfvvx_calico-system(ff309b76-ee57-400d-897b-26dbc2ef6eeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59b7b569f5-wfvvx_calico-system(ff309b76-ee57-400d-897b-26dbc2ef6eeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" podUID="ff309b76-ee57-400d-897b-26dbc2ef6eeb" Sep 5 00:06:20.677294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f-shm.mount: Deactivated successfully. Sep 5 00:06:20.776005 containerd[1469]: time="2025-09-05T00:06:20.775934937Z" level=error msg="Failed to destroy network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.776701 containerd[1469]: time="2025-09-05T00:06:20.776558102Z" level=error msg="encountered an error cleaning up failed sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.776701 containerd[1469]: time="2025-09-05T00:06:20.776609886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8skvt,Uid:70048a66-95e5-4bf3-806b-b05cba03386e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.777371 kubelet[2553]: E0905 00:06:20.776898 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.777371 kubelet[2553]: E0905 00:06:20.777012 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8skvt" Sep 5 00:06:20.777371 kubelet[2553]: E0905 00:06:20.777034 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8skvt" Sep 5 00:06:20.777489 kubelet[2553]: E0905 00:06:20.777111 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8skvt_kube-system(70048a66-95e5-4bf3-806b-b05cba03386e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8skvt_kube-system(70048a66-95e5-4bf3-806b-b05cba03386e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8skvt" podUID="70048a66-95e5-4bf3-806b-b05cba03386e" Sep 5 00:06:20.788013 containerd[1469]: time="2025-09-05T00:06:20.787810247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-ljpgm,Uid:28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148,Namespace:calico-apiserver,Attempt:0,}" Sep 5 00:06:20.853617 containerd[1469]: time="2025-09-05T00:06:20.853474969Z" level=error msg="Failed to destroy network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.854302 containerd[1469]: time="2025-09-05T00:06:20.854258306Z" level=error msg="encountered an error cleaning up failed sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.854361 containerd[1469]: time="2025-09-05T00:06:20.854331754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66fd4cc6cf-ntv2j,Uid:5af96d62-36e9-4622-9b9f-c02e6f235a53,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.858909 kubelet[2553]: E0905 00:06:20.858517 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.858909 kubelet[2553]: E0905 00:06:20.858603 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66fd4cc6cf-ntv2j" Sep 5 00:06:20.858909 kubelet[2553]: E0905 00:06:20.858641 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66fd4cc6cf-ntv2j" Sep 5 00:06:20.859229 kubelet[2553]: E0905 00:06:20.859045 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-66fd4cc6cf-ntv2j_calico-system(5af96d62-36e9-4622-9b9f-c02e6f235a53)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66fd4cc6cf-ntv2j_calico-system(5af96d62-36e9-4622-9b9f-c02e6f235a53)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66fd4cc6cf-ntv2j" podUID="5af96d62-36e9-4622-9b9f-c02e6f235a53" Sep 5 00:06:20.896627 containerd[1469]: time="2025-09-05T00:06:20.896551623Z" level=error msg="Failed to destroy network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.897082 containerd[1469]: time="2025-09-05T00:06:20.897043041Z" level=error msg="encountered an error cleaning up failed sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.897136 containerd[1469]: time="2025-09-05T00:06:20.897105267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8f46b5d-7zqzw,Uid:43623a80-72d4-46e0-adcc-392202d1d1f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.897486 kubelet[2553]: E0905 00:06:20.897400 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.897486 kubelet[2553]: E0905 00:06:20.897467 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" Sep 5 00:06:20.897636 kubelet[2553]: E0905 00:06:20.897489 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" Sep 5 00:06:20.897636 kubelet[2553]: E0905 00:06:20.897553 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56d8f46b5d-7zqzw_calico-apiserver(43623a80-72d4-46e0-adcc-392202d1d1f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56d8f46b5d-7zqzw_calico-apiserver(43623a80-72d4-46e0-adcc-392202d1d1f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" podUID="43623a80-72d4-46e0-adcc-392202d1d1f2" Sep 5 00:06:20.898793 containerd[1469]: time="2025-09-05T00:06:20.898693092Z" level=error msg="Failed to destroy network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.899233 containerd[1469]: time="2025-09-05T00:06:20.899177267Z" level=error msg="encountered an error cleaning up failed sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.900140 containerd[1469]: time="2025-09-05T00:06:20.900106967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6gz8l,Uid:fb5a9823-833b-45bb-9546-46d6c264eca2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.900682 kubelet[2553]: E0905 00:06:20.900532 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.900682 kubelet[2553]: E0905 00:06:20.900580 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6gz8l" Sep 5 00:06:20.900682 kubelet[2553]: E0905 00:06:20.900605 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6gz8l" Sep 5 00:06:20.900791 kubelet[2553]: E0905 00:06:20.900642 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6gz8l_kube-system(fb5a9823-833b-45bb-9546-46d6c264eca2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6gz8l_kube-system(fb5a9823-833b-45bb-9546-46d6c264eca2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6gz8l" podUID="fb5a9823-833b-45bb-9546-46d6c264eca2" Sep 5 00:06:20.904737 containerd[1469]: time="2025-09-05T00:06:20.904492063Z" level=error msg="Failed to destroy network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.905218 containerd[1469]: time="2025-09-05T00:06:20.905157091Z" level=error msg="encountered an error cleaning up failed sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.905218 containerd[1469]: time="2025-09-05T00:06:20.905211691Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-lvxcz,Uid:eea7809f-54a5-40ed-94cd-7efa0cc56047,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.905496 kubelet[2553]: E0905 00:06:20.905443 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.905560 kubelet[2553]: E0905 00:06:20.905505 2553 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" Sep 5 00:06:20.905560 kubelet[2553]: E0905 00:06:20.905530 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" Sep 5 00:06:20.905629 kubelet[2553]: E0905 00:06:20.905572 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579d867b4c-lvxcz_calico-apiserver(eea7809f-54a5-40ed-94cd-7efa0cc56047)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579d867b4c-lvxcz_calico-apiserver(eea7809f-54a5-40ed-94cd-7efa0cc56047)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" podUID="eea7809f-54a5-40ed-94cd-7efa0cc56047" Sep 5 00:06:20.917019 containerd[1469]: time="2025-09-05T00:06:20.916897630Z" level=error msg="Failed to destroy network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.917512 containerd[1469]: time="2025-09-05T00:06:20.917457175Z" level=error msg="encountered an error cleaning up failed sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.917557 containerd[1469]: time="2025-09-05T00:06:20.917532167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jxkr8,Uid:b658ca0f-5303-457c-9cf1-6dddd6c1387f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.917940 kubelet[2553]: E0905 00:06:20.917883 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:06:20.918008 kubelet[2553]: E0905 00:06:20.917975 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-jxkr8" Sep 5 00:06:20.918044 kubelet[2553]: E0905 00:06:20.918007 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-jxkr8" Sep 5 00:06:20.918096 kubelet[2553]: E0905 00:06:20.918070 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-jxkr8_calico-system(b658ca0f-5303-457c-9cf1-6dddd6c1387f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-jxkr8_calico-system(b658ca0f-5303-457c-9cf1-6dddd6c1387f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-jxkr8" podUID="b658ca0f-5303-457c-9cf1-6dddd6c1387f" Sep 5 00:06:20.934158 containerd[1469]: time="2025-09-05T00:06:20.934090420Z" level=error msg="Failed to destroy network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.934507 containerd[1469]: time="2025-09-05T00:06:20.934478201Z" level=error msg="encountered an error cleaning up failed sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.934559 containerd[1469]: time="2025-09-05T00:06:20.934538983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-ljpgm,Uid:28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.934822 kubelet[2553]: E0905 00:06:20.934782 2553 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:20.934888 kubelet[2553]: E0905 00:06:20.934851 2553 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" Sep 5 00:06:20.934888 kubelet[2553]: E0905 00:06:20.934871 2553 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" Sep 5 00:06:20.934941 kubelet[2553]: E0905 00:06:20.934915 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579d867b4c-ljpgm_calico-apiserver(28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579d867b4c-ljpgm_calico-apiserver(28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" podUID="28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148" Sep 5 00:06:21.162849 kubelet[2553]: I0905 00:06:21.162597 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:21.165286 kubelet[2553]: I0905 00:06:21.165229 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:21.167241 kubelet[2553]: I0905 00:06:21.167005 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:21.169278 kubelet[2553]: I0905 00:06:21.168420 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:21.174173 containerd[1469]: time="2025-09-05T00:06:21.174120568Z" level=info msg="StopPodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\"" Sep 5 00:06:21.175029 containerd[1469]: time="2025-09-05T00:06:21.174353887Z" level=info msg="StopPodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\"" Sep 5 00:06:21.175916 containerd[1469]: time="2025-09-05T00:06:21.175845465Z" level=info msg="StopPodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\"" Sep 5 00:06:21.178031 containerd[1469]: time="2025-09-05T00:06:21.177995967Z" level=info msg="StopPodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\"" Sep 5 00:06:21.183903 
containerd[1469]: time="2025-09-05T00:06:21.183842058Z" level=info msg="Ensure that sandbox 555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da in task-service has been cleanup successfully" Sep 5 00:06:21.184929 containerd[1469]: time="2025-09-05T00:06:21.184881207Z" level=info msg="Ensure that sandbox a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3 in task-service has been cleanup successfully" Sep 5 00:06:21.188560 containerd[1469]: time="2025-09-05T00:06:21.188500292Z" level=info msg="Ensure that sandbox 6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7 in task-service has been cleanup successfully" Sep 5 00:06:21.189615 containerd[1469]: time="2025-09-05T00:06:21.189567155Z" level=info msg="Ensure that sandbox 2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743 in task-service has been cleanup successfully" Sep 5 00:06:21.189924 kubelet[2553]: I0905 00:06:21.189833 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:21.196806 containerd[1469]: time="2025-09-05T00:06:21.196187663Z" level=info msg="StopPodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\"" Sep 5 00:06:21.196806 containerd[1469]: time="2025-09-05T00:06:21.196441333Z" level=info msg="Ensure that sandbox 6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e in task-service has been cleanup successfully" Sep 5 00:06:21.199020 kubelet[2553]: I0905 00:06:21.198963 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:21.200423 containerd[1469]: time="2025-09-05T00:06:21.200369809Z" level=info msg="StopPodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\"" Sep 5 00:06:21.200893 containerd[1469]: time="2025-09-05T00:06:21.200610985Z" level=info msg="Ensure that sandbox 02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f in task-service has been cleanup successfully" Sep 5 00:06:21.204150 kubelet[2553]: I0905 00:06:21.204083 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:21.207033 containerd[1469]: time="2025-09-05T00:06:21.206229698Z" level=info msg="StopPodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\"" Sep 5 00:06:21.207033 containerd[1469]: time="2025-09-05T00:06:21.206816077Z" level=info msg="Ensure that sandbox fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145 in task-service has been cleanup successfully" Sep 5 00:06:21.211471 kubelet[2553]: I0905 00:06:21.211420 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:21.213333 containerd[1469]: time="2025-09-05T00:06:21.213297705Z" level=info msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\"" Sep 5 00:06:21.213638 containerd[1469]: time="2025-09-05T00:06:21.213619402Z" level=info msg="Ensure that sandbox ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901 in task-service has been cleanup successfully" Sep 5 00:06:21.222563 kubelet[2553]: I0905 00:06:21.222530 2553 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:21.224065 containerd[1469]: time="2025-09-05T00:06:21.223280801Z" level=info msg="StopPodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\"" Sep 5 00:06:21.225728 containerd[1469]: time="2025-09-05T00:06:21.225557879Z" level=info msg="Ensure that sandbox 8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19 in task-service has been cleanup successfully" Sep 5 00:06:21.226068 containerd[1469]: time="2025-09-05T00:06:21.225603089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 00:06:21.285474 containerd[1469]: time="2025-09-05T00:06:21.285403882Z" level=error msg="StopPodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" failed" error="failed to destroy network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.285996 kubelet[2553]: E0905 00:06:21.285938 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:21.286261 kubelet[2553]: E0905 00:06:21.286157 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da"} Sep 5 00:06:21.286398 kubelet[2553]: E0905 00:06:21.286380 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb5a9823-833b-45bb-9546-46d6c264eca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.286532 kubelet[2553]: E0905 00:06:21.286511 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb5a9823-833b-45bb-9546-46d6c264eca2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6gz8l" podUID="fb5a9823-833b-45bb-9546-46d6c264eca2" Sep 5 00:06:21.307430 containerd[1469]: time="2025-09-05T00:06:21.307273841Z" level=error msg="StopPodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" failed" error="failed to destroy network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 5 00:06:21.308323 containerd[1469]: time="2025-09-05T00:06:21.308279934Z" level=error msg="StopPodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" failed" error="failed to destroy network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.308412 kubelet[2553]: E0905 00:06:21.308298 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:21.308412 kubelet[2553]: E0905 00:06:21.308365 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3"} Sep 5 00:06:21.308412 kubelet[2553]: E0905 00:06:21.308406 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5af96d62-36e9-4622-9b9f-c02e6f235a53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.308554 kubelet[2553]: E0905 00:06:21.308431 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5af96d62-36e9-4622-9b9f-c02e6f235a53\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66fd4cc6cf-ntv2j" podUID="5af96d62-36e9-4622-9b9f-c02e6f235a53" Sep 5 00:06:21.308554 kubelet[2553]: E0905 00:06:21.308485 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:21.308554 kubelet[2553]: E0905 00:06:21.308502 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7"} Sep 5 00:06:21.308554 kubelet[2553]: E0905 00:06:21.308518 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70048a66-95e5-4bf3-806b-b05cba03386e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.308738 kubelet[2553]: E0905 00:06:21.308539 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70048a66-95e5-4bf3-806b-b05cba03386e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8skvt" podUID="70048a66-95e5-4bf3-806b-b05cba03386e" Sep 5 00:06:21.309748 containerd[1469]: time="2025-09-05T00:06:21.309198379Z" level=error msg="StopPodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" failed" error="failed to destroy network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.309825 kubelet[2553]: E0905 00:06:21.309670 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:21.309825 kubelet[2553]: E0905 00:06:21.309743 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145"} Sep 5 00:06:21.309825 kubelet[2553]: E0905 00:06:21.309769 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eea7809f-54a5-40ed-94cd-7efa0cc56047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.309825 kubelet[2553]: E0905 00:06:21.309803 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eea7809f-54a5-40ed-94cd-7efa0cc56047\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" podUID="eea7809f-54a5-40ed-94cd-7efa0cc56047" Sep 5 00:06:21.312831 containerd[1469]: time="2025-09-05T00:06:21.312790540Z" level=error msg="StopPodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" failed" 
error="failed to destroy network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.313840 kubelet[2553]: E0905 00:06:21.313813 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:21.313889 kubelet[2553]: E0905 00:06:21.313844 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f"} Sep 5 00:06:21.313889 kubelet[2553]: E0905 00:06:21.313865 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff309b76-ee57-400d-897b-26dbc2ef6eeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.313889 kubelet[2553]: E0905 00:06:21.313881 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff309b76-ee57-400d-897b-26dbc2ef6eeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" podUID="ff309b76-ee57-400d-897b-26dbc2ef6eeb" Sep 5 00:06:21.317552 containerd[1469]: time="2025-09-05T00:06:21.317449996Z" level=error msg="StopPodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" failed" error="failed to destroy network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.317921 kubelet[2553]: E0905 00:06:21.317793 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:21.318179 kubelet[2553]: E0905 00:06:21.317899 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19"} Sep 5 00:06:21.318179 kubelet[2553]: E0905 
00:06:21.318064 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.318179 kubelet[2553]: E0905 00:06:21.318126 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" podUID="28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148" Sep 5 00:06:21.319261 containerd[1469]: time="2025-09-05T00:06:21.319215344Z" level=error msg="StopPodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" failed" error="failed to destroy network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.319506 kubelet[2553]: E0905 00:06:21.319408 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:21.319506 kubelet[2553]: E0905 00:06:21.319445 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e"} Sep 5 00:06:21.319506 kubelet[2553]: E0905 00:06:21.319474 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"43623a80-72d4-46e0-adcc-392202d1d1f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.319506 kubelet[2553]: E0905 00:06:21.319499 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"43623a80-72d4-46e0-adcc-392202d1d1f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" podUID="43623a80-72d4-46e0-adcc-392202d1d1f2" Sep 5 00:06:21.321810 containerd[1469]: time="2025-09-05T00:06:21.321772423Z" level=error msg="StopPodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" failed" error="failed to destroy network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.321951 kubelet[2553]: E0905 00:06:21.321916 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:21.322043 kubelet[2553]: E0905 00:06:21.321954 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743"} Sep 5 00:06:21.322043 kubelet[2553]: E0905 00:06:21.322001 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b658ca0f-5303-457c-9cf1-6dddd6c1387f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.322043 kubelet[2553]: E0905 00:06:21.322028 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b658ca0f-5303-457c-9cf1-6dddd6c1387f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-jxkr8" podUID="b658ca0f-5303-457c-9cf1-6dddd6c1387f" Sep 5 00:06:21.332690 containerd[1469]: time="2025-09-05T00:06:21.332626561Z" level=error msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" failed" error="failed to destroy network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:21.332989 kubelet[2553]: E0905 00:06:21.332933 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:21.333050 kubelet[2553]: E0905 00:06:21.333024 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901"} Sep 5 00:06:21.333096 kubelet[2553]: E0905 00:06:21.333073 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"977e9373-23c2-4b46-9d36-9bf58abbfad5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:21.333188 kubelet[2553]: E0905 00:06:21.333125 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"977e9373-23c2-4b46-9d36-9bf58abbfad5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:21.612919 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3-shm.mount: Deactivated successfully. Sep 5 00:06:21.613081 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7-shm.mount: Deactivated successfully. Sep 5 00:06:31.608382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount29180558.mount: Deactivated successfully. 
Sep 5 00:06:33.058515 containerd[1469]: time="2025-09-05T00:06:33.058454026Z" level=info msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\"" Sep 5 00:06:33.589671 containerd[1469]: time="2025-09-05T00:06:33.589590247Z" level=error msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" failed" error="failed to destroy network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 00:06:33.590214 kubelet[2553]: E0905 00:06:33.590142 2553 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:33.591716 kubelet[2553]: E0905 00:06:33.590234 2553 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901"} Sep 5 00:06:33.591716 kubelet[2553]: E0905 00:06:33.590300 2553 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"977e9373-23c2-4b46-9d36-9bf58abbfad5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 5 00:06:33.591716 kubelet[2553]: E0905 00:06:33.590336 2553 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"977e9373-23c2-4b46-9d36-9bf58abbfad5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s9hbl" podUID="977e9373-23c2-4b46-9d36-9bf58abbfad5" Sep 5 00:06:33.596772 containerd[1469]: time="2025-09-05T00:06:33.596719764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:33.597999 containerd[1469]: time="2025-09-05T00:06:33.597931475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 5 00:06:33.599610 containerd[1469]: time="2025-09-05T00:06:33.599585587Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:33.602373 containerd[1469]: time="2025-09-05T00:06:33.602325870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 12.375276453s" Sep 5 00:06:33.602373 containerd[1469]: time="2025-09-05T00:06:33.602369798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 5 00:06:33.613088 containerd[1469]: time="2025-09-05T00:06:33.612499936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:33.615642 containerd[1469]: time="2025-09-05T00:06:33.615605365Z" level=info msg="CreateContainer within sandbox \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 00:06:33.646047 containerd[1469]: time="2025-09-05T00:06:33.645998073Z" level=info msg="CreateContainer within sandbox \"83801920e8716f9629854fbd95de95cb688955564d17801addc3850cee2158e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4f778e99fe120d02e26ae15840f3a0f8cf77b568992760501405a7b0f99be8e9\"" Sep 5 00:06:33.646622 containerd[1469]: time="2025-09-05T00:06:33.646583739Z" level=info msg="StartContainer for \"4f778e99fe120d02e26ae15840f3a0f8cf77b568992760501405a7b0f99be8e9\"" Sep 5 00:06:33.699121 systemd[1]: Started cri-containerd-4f778e99fe120d02e26ae15840f3a0f8cf77b568992760501405a7b0f99be8e9.scope - libcontainer container 4f778e99fe120d02e26ae15840f3a0f8cf77b568992760501405a7b0f99be8e9. Sep 5 00:06:33.733925 containerd[1469]: time="2025-09-05T00:06:33.733878918Z" level=info msg="StartContainer for \"4f778e99fe120d02e26ae15840f3a0f8cf77b568992760501405a7b0f99be8e9\" returns successfully" Sep 5 00:06:33.843032 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 00:06:33.844047 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 5 00:06:33.977860 containerd[1469]: time="2025-09-05T00:06:33.977412757Z" level=info msg="StopPodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\"" Sep 5 00:06:34.060254 containerd[1469]: time="2025-09-05T00:06:34.060204762Z" level=info msg="StopPodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\"" Sep 5 00:06:34.062094 containerd[1469]: time="2025-09-05T00:06:34.061133549Z" level=info msg="StopPodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\"" Sep 5 00:06:34.062094 containerd[1469]: time="2025-09-05T00:06:34.061192877Z" level=info msg="StopPodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\"" Sep 5 00:06:34.100589 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:43706.service - OpenSSH per-connection server daemon (10.0.0.1:43706). Sep 5 00:06:34.159346 sshd[3973]: Accepted publickey for core from 10.0.0.1 port 43706 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:34.162325 sshd[3973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:34.169571 systemd-logind[1453]: New session 10 of user core. Sep 5 00:06:34.175264 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.156 [INFO][3962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.157 [INFO][3962] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" iface="eth0" netns="/var/run/netns/cni-617b2a31-e140-665f-e025-03201689ffeb" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.157 [INFO][3962] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" iface="eth0" netns="/var/run/netns/cni-617b2a31-e140-665f-e025-03201689ffeb" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.157 [INFO][3962] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" iface="eth0" netns="/var/run/netns/cni-617b2a31-e140-665f-e025-03201689ffeb" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.158 [INFO][3962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.158 [INFO][3962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.219 [INFO][3989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.221 [INFO][3989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.221 [INFO][3989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.232 [WARNING][3989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.232 [INFO][3989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.238 [INFO][3989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:34.260227 containerd[1469]: 2025-09-05 00:06:34.243 [INFO][3962] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:34.260775 containerd[1469]: time="2025-09-05T00:06:34.260736195Z" level=info msg="TearDown network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" successfully" Sep 5 00:06:34.262081 containerd[1469]: time="2025-09-05T00:06:34.260768318Z" level=info msg="StopPodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" returns successfully" Sep 5 00:06:34.274021 containerd[1469]: time="2025-09-05T00:06:34.273873964Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-ljpgm,Uid:28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.043 [INFO][3908] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.044 [INFO][3908] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" iface="eth0" netns="/var/run/netns/cni-2265c34d-26eb-9049-7531-aa2c80a1d7c9" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.044 [INFO][3908] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" iface="eth0" netns="/var/run/netns/cni-2265c34d-26eb-9049-7531-aa2c80a1d7c9" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.044 [INFO][3908] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" iface="eth0" netns="/var/run/netns/cni-2265c34d-26eb-9049-7531-aa2c80a1d7c9" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.044 [INFO][3908] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.044 [INFO][3908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.220 [INFO][3918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.222 [INFO][3918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.238 [INFO][3918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.243 [WARNING][3918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.244 [INFO][3918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.245 [INFO][3918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:34.274678 containerd[1469]: 2025-09-05 00:06:34.271 [INFO][3908] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:34.275746 containerd[1469]: time="2025-09-05T00:06:34.275691529Z" level=info msg="TearDown network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" successfully" Sep 5 00:06:34.275746 containerd[1469]: time="2025-09-05T00:06:34.275740055Z" level=info msg="StopPodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" returns successfully" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.140 [INFO][3961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.141 [INFO][3961] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" iface="eth0" netns="/var/run/netns/cni-44ad771e-8f92-6f2f-7168-6739602abd0c" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.141 [INFO][3961] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" iface="eth0" netns="/var/run/netns/cni-44ad771e-8f92-6f2f-7168-6739602abd0c" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.141 [INFO][3961] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" iface="eth0" netns="/var/run/netns/cni-44ad771e-8f92-6f2f-7168-6739602abd0c" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.141 [INFO][3961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.141 [INFO][3961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.220 [INFO][3983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.224 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.245 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
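With calico-node running and /var/lib/calico/nodename in place, the retried StopPodSandbox calls now succeed, and each teardown trace follows the same shape: clean up the netns, notice the workload's veth is already gone, then release the pod's IP under the host-wide IPAM lock, where "Asked to release address but it doesn't exist" is deliberately downgraded to a warning, since a missing allocation just means there is nothing left to free. A simplified Go sketch of that DEL ordering, with invented names rather than Calico's actual code:

package main

import (
	"errors"
	"fmt"
	"sync"
)

var (
	hostIPAMLock sync.Mutex            // the "host-wide IPAM lock" in the trace
	allocations  = map[string]string{} // handleID -> IP; stand-in for the datastore
	errNoAddress = errors.New("address doesn't exist")
)

// releaseByHandle frees whatever the handle still owns; releasing a
// handle that owns nothing reports errNoAddress instead of failing hard.
func releaseByHandle(handleID string) error {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock()
	if _, ok := allocations[handleID]; !ok {
		return errNoAddress
	}
	delete(allocations, handleID)
	return nil
}

// cmdDel mirrors the DEL traces above: dataplane cleanup first, IPAM
// release second, with a missing allocation logged and ignored.
func cmdDel(containerID string) {
	fmt.Println("Cleaning up netns ContainerID=" + containerID)
	fmt.Println("Workload's veth was already gone. Nothing to do.")
	if err := releaseByHandle("k8s-pod-network." + containerID); errors.Is(err, errNoAddress) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
	}
	fmt.Println("Teardown processing complete.")
}

func main() {
	cmdDel("8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19")
}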
Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.255 [WARNING][3983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.255 [INFO][3983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.257 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:34.282352 containerd[1469]: 2025-09-05 00:06:34.274 [INFO][3961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:34.284720 containerd[1469]: time="2025-09-05T00:06:34.284686087Z" level=info msg="TearDown network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" successfully" Sep 5 00:06:34.284720 containerd[1469]: time="2025-09-05T00:06:34.284715806Z" level=info msg="StopPodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" returns successfully" Sep 5 00:06:34.285893 containerd[1469]: time="2025-09-05T00:06:34.285866725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jxkr8,Uid:b658ca0f-5303-457c-9cf1-6dddd6c1387f,Namespace:calico-system,Attempt:1,}" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.162 [INFO][3956] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.162 [INFO][3956] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" iface="eth0" netns="/var/run/netns/cni-e55108a6-4e50-8d34-02ea-695804c95bd5" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.162 [INFO][3956] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" iface="eth0" netns="/var/run/netns/cni-e55108a6-4e50-8d34-02ea-695804c95bd5" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.162 [INFO][3956] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" iface="eth0" netns="/var/run/netns/cni-e55108a6-4e50-8d34-02ea-695804c95bd5" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.162 [INFO][3956] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.162 [INFO][3956] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.261 [INFO][3991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.261 [INFO][3991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.261 [INFO][3991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.273 [WARNING][3991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.274 [INFO][3991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.287 [INFO][3991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:34.295551 containerd[1469]: 2025-09-05 00:06:34.291 [INFO][3956] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:34.295966 containerd[1469]: time="2025-09-05T00:06:34.295679401Z" level=info msg="TearDown network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" successfully" Sep 5 00:06:34.295966 containerd[1469]: time="2025-09-05T00:06:34.295705603Z" level=info msg="StopPodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" returns successfully" Sep 5 00:06:34.296766 containerd[1469]: time="2025-09-05T00:06:34.296727556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8f46b5d-7zqzw,Uid:43623a80-72d4-46e0-adcc-392202d1d1f2,Namespace:calico-apiserver,Attempt:1,}" Sep 5 00:06:34.352638 kubelet[2553]: I0905 00:06:34.351094 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-ca-bundle\") pod \"5af96d62-36e9-4622-9b9f-c02e6f235a53\" (UID: \"5af96d62-36e9-4622-9b9f-c02e6f235a53\") " Sep 5 00:06:34.352638 kubelet[2553]: I0905 00:06:34.351186 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5z9sz\" (UniqueName: \"kubernetes.io/projected/5af96d62-36e9-4622-9b9f-c02e6f235a53-kube-api-access-5z9sz\") pod \"5af96d62-36e9-4622-9b9f-c02e6f235a53\" (UID: \"5af96d62-36e9-4622-9b9f-c02e6f235a53\") " Sep 5 00:06:34.352638 kubelet[2553]: I0905 00:06:34.351213 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-backend-key-pair\") pod \"5af96d62-36e9-4622-9b9f-c02e6f235a53\" (UID: \"5af96d62-36e9-4622-9b9f-c02e6f235a53\") " Sep 5 00:06:34.352638 kubelet[2553]: I0905 00:06:34.352425 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5af96d62-36e9-4622-9b9f-c02e6f235a53" (UID: "5af96d62-36e9-4622-9b9f-c02e6f235a53"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 5 00:06:34.360065 kubelet[2553]: I0905 00:06:34.359840 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5af96d62-36e9-4622-9b9f-c02e6f235a53" (UID: "5af96d62-36e9-4622-9b9f-c02e6f235a53"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:06:34.360065 kubelet[2553]: I0905 00:06:34.360059 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5af96d62-36e9-4622-9b9f-c02e6f235a53-kube-api-access-5z9sz" (OuterVolumeSpecName: "kube-api-access-5z9sz") pod "5af96d62-36e9-4622-9b9f-c02e6f235a53" (UID: "5af96d62-36e9-4622-9b9f-c02e6f235a53"). InnerVolumeSpecName "kube-api-access-5z9sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:06:34.378671 sshd[3973]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:34.384809 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:43706.service: Deactivated successfully. Sep 5 00:06:34.387283 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 00:06:34.389678 systemd-logind[1453]: Session 10 logged out. 
Waiting for processes to exit. Sep 5 00:06:34.390756 systemd-logind[1453]: Removed session 10. Sep 5 00:06:34.449032 systemd-networkd[1402]: calieddfabaf632: Link UP Sep 5 00:06:34.449271 systemd-networkd[1402]: calieddfabaf632: Gained carrier Sep 5 00:06:34.452785 kubelet[2553]: I0905 00:06:34.452472 2553 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 00:06:34.452785 kubelet[2553]: I0905 00:06:34.452506 2553 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5af96d62-36e9-4622-9b9f-c02e6f235a53-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 00:06:34.452785 kubelet[2553]: I0905 00:06:34.452515 2553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5z9sz\" (UniqueName: \"kubernetes.io/projected/5af96d62-36e9-4622-9b9f-c02e6f235a53-kube-api-access-5z9sz\") on node \"localhost\" DevicePath \"\"" Sep 5 00:06:34.463837 kubelet[2553]: I0905 00:06:34.461291 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fclvv" podStartSLOduration=2.45464043 podStartE2EDuration="27.461115114s" podCreationTimestamp="2025-09-05 00:06:07 +0000 UTC" firstStartedPulling="2025-09-05 00:06:08.596635646 +0000 UTC m=+20.673597428" lastFinishedPulling="2025-09-05 00:06:33.60311033 +0000 UTC m=+45.680072112" observedRunningTime="2025-09-05 00:06:34.29995267 +0000 UTC m=+46.376914452" watchObservedRunningTime="2025-09-05 00:06:34.461115114 +0000 UTC m=+46.538076896" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.332 [INFO][4031] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.342 [INFO][4031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0 calico-apiserver-579d867b4c- calico-apiserver 28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148 946 0 2025-09-05 00:06:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579d867b4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-579d867b4c-ljpgm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calieddfabaf632 [] [] }} ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.343 [INFO][4031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.392 [INFO][4059] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" HandleID="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" 
Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.392 [INFO][4059] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" HandleID="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7e40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-579d867b4c-ljpgm", "timestamp":"2025-09-05 00:06:34.392386385 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.392 [INFO][4059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.393 [INFO][4059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.393 [INFO][4059] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.399 [INFO][4059] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.407 [INFO][4059] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.411 [INFO][4059] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.412 [INFO][4059] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.413 [INFO][4059] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.414 [INFO][4059] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.415 [INFO][4059] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.422 [INFO][4059] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.430 [INFO][4059] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.430 [INFO][4059] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" host="localhost" Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.430 [INFO][4059] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:34.466228 containerd[1469]: 2025-09-05 00:06:34.431 [INFO][4059] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" HandleID="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.466828 containerd[1469]: 2025-09-05 00:06:34.440 [INFO][4031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-579d867b4c-ljpgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieddfabaf632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:34.466828 containerd[1469]: 2025-09-05 00:06:34.440 [INFO][4031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.466828 containerd[1469]: 2025-09-05 00:06:34.440 [INFO][4031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieddfabaf632 ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.466828 containerd[1469]: 2025-09-05 00:06:34.449 [INFO][4031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.466828 containerd[1469]: 2025-09-05 00:06:34.449 [INFO][4031] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d", Pod:"calico-apiserver-579d867b4c-ljpgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieddfabaf632", MAC:"02:c9:ad:eb:48:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:34.466828 containerd[1469]: 2025-09-05 00:06:34.460 [INFO][4031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-ljpgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:34.499513 containerd[1469]: time="2025-09-05T00:06:34.498932687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:34.499513 containerd[1469]: time="2025-09-05T00:06:34.499010742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:34.499513 containerd[1469]: time="2025-09-05T00:06:34.499022555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:34.499513 containerd[1469]: time="2025-09-05T00:06:34.499111653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:34.519148 systemd[1]: Started cri-containerd-5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d.scope - libcontainer container 5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d. 
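The ADD path for calico-apiserver-579d867b4c-ljpgm then runs the same machinery in the other direction: under the host-wide lock, IPAM finds the block affine to this host (192.168.88.128/26), claims the first free address in it (192.168.88.129), writes the block back, and the populated WorkloadEndpoint on calieddfabaf632 is committed to the datastore before the sandbox's shim starts. A compact Go sketch of that first-free-address walk over an affine block, illustrative only and built on net/netip (the handle names are invented):

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

var hostIPAMLock sync.Mutex // serializes all assignments on this host

type block struct {
	cidr netip.Prefix          // e.g. 192.168.88.128/26, affine to "localhost"
	used map[netip.Addr]string // addr -> handleID
}

// assign claims the next free address in the block, skipping the block's
// own network address, which is why .129 is the first IP handed out above.
func (b *block) assign(handleID string) (netip.Addr, bool) {
	hostIPAMLock.Lock()
	defer hostIPAMLock.Unlock()
	for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handleID
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.88.128/26"), used: map[netip.Addr]string{}}
	for _, h := range []string{"ljpgm", "7zqzw", "jxkr8"} {
		ip, _ := b.assign("k8s-pod-network." + h)
		fmt.Printf("%s -> %s\n", h, ip) // .129, .130, .131 in order
	}
}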
Sep 5 00:06:34.533709 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:06:34.544851 systemd-networkd[1402]: calia27ebc39e5f: Link UP Sep 5 00:06:34.546420 systemd-networkd[1402]: calia27ebc39e5f: Gained carrier Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.398 [INFO][4055] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.414 [INFO][4055] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0 calico-apiserver-56d8f46b5d- calico-apiserver 43623a80-72d4-46e0-adcc-392202d1d1f2 947 0 2025-09-05 00:06:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56d8f46b5d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-56d8f46b5d-7zqzw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia27ebc39e5f [] [] }} ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.415 [INFO][4055] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.451 [INFO][4108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" HandleID="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.451 [INFO][4108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" HandleID="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f5f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-56d8f46b5d-7zqzw", "timestamp":"2025-09-05 00:06:34.451476114 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.452 [INFO][4108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.452 [INFO][4108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.452 [INFO][4108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.500 [INFO][4108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.508 [INFO][4108] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.513 [INFO][4108] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.515 [INFO][4108] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.517 [INFO][4108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.517 [INFO][4108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.518 [INFO][4108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.527 [INFO][4108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.535 [INFO][4108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.535 [INFO][4108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" host="localhost" Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.535 [INFO][4108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
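The second apiserver pod repeats the walk moments later and, with allocations serialized by the same host-wide lock, takes the next address in the block (192.168.88.130) behind the newly created host-side interface calia27ebc39e5f. Names of that shape fit Linux's 15-character interface-name budget (IFNAMSIZ minus the trailing NUL): a fixed "cali" prefix plus 11 digest characters. The following is a hypothetical illustration of such a derivation; Calico's real scheme may differ in detail:

// Hypothetical illustration of deriving a host-side veth name like
// "calia27ebc39e5f": "cali" plus 11 hex characters of a digest keeps the
// name unique per endpoint and exactly 15 characters long.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

func hostVethName(endpointKey string) string {
	sum := sha1.Sum([]byte(endpointKey))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	name := hostVethName("localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0")
	fmt.Println(name, "len =", len(name)) // always 15 characters
}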
Sep 5 00:06:34.562794 containerd[1469]: 2025-09-05 00:06:34.535 [INFO][4108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" HandleID="k8s-pod-network.c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.563432 containerd[1469]: 2025-09-05 00:06:34.539 [INFO][4055] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0", GenerateName:"calico-apiserver-56d8f46b5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43623a80-72d4-46e0-adcc-392202d1d1f2", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8f46b5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-56d8f46b5d-7zqzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia27ebc39e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:34.563432 containerd[1469]: 2025-09-05 00:06:34.539 [INFO][4055] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.563432 containerd[1469]: 2025-09-05 00:06:34.539 [INFO][4055] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia27ebc39e5f ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.563432 containerd[1469]: 2025-09-05 00:06:34.546 [INFO][4055] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.563432 containerd[1469]: 2025-09-05 00:06:34.547 [INFO][4055] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0", GenerateName:"calico-apiserver-56d8f46b5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43623a80-72d4-46e0-adcc-392202d1d1f2", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8f46b5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd", Pod:"calico-apiserver-56d8f46b5d-7zqzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia27ebc39e5f", MAC:"d6:83:aa:ab:c7:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:34.563432 containerd[1469]: 2025-09-05 00:06:34.559 [INFO][4055] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd" Namespace="calico-apiserver" Pod="calico-apiserver-56d8f46b5d-7zqzw" WorkloadEndpoint="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:34.563432 containerd[1469]: time="2025-09-05T00:06:34.562737314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-ljpgm,Uid:28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d\"" Sep 5 00:06:34.565540 containerd[1469]: time="2025-09-05T00:06:34.565512102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:06:34.585644 containerd[1469]: time="2025-09-05T00:06:34.585347561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:34.585644 containerd[1469]: time="2025-09-05T00:06:34.585455937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:34.585644 containerd[1469]: time="2025-09-05T00:06:34.585470115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:34.585644 containerd[1469]: time="2025-09-05T00:06:34.585578231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:34.604157 systemd[1]: Started cri-containerd-c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd.scope - libcontainer container c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd. Sep 5 00:06:34.617404 systemd[1]: run-netns-cni\x2d617b2a31\x2de140\x2d665f\x2de025\x2d03201689ffeb.mount: Deactivated successfully. Sep 5 00:06:34.617782 systemd[1]: run-netns-cni\x2d44ad771e\x2d8f92\x2d6f2f\x2d7168\x2d6739602abd0c.mount: Deactivated successfully. Sep 5 00:06:34.617881 systemd[1]: run-netns-cni\x2de55108a6\x2d4e50\x2d8d34\x2d02ea\x2d695804c95bd5.mount: Deactivated successfully. Sep 5 00:06:34.617962 systemd[1]: run-netns-cni\x2d2265c34d\x2d26eb\x2d9049\x2d7531\x2daa2c80a1d7c9.mount: Deactivated successfully. Sep 5 00:06:34.618133 systemd[1]: var-lib-kubelet-pods-5af96d62\x2d36e9\x2d4622\x2d9b9f\x2dc02e6f235a53-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5z9sz.mount: Deactivated successfully. Sep 5 00:06:34.618234 systemd[1]: var-lib-kubelet-pods-5af96d62\x2d36e9\x2d4622\x2d9b9f\x2dc02e6f235a53-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 00:06:34.626914 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:06:34.642123 systemd-networkd[1402]: califfb0465f1fa: Link UP Sep 5 00:06:34.649318 systemd-networkd[1402]: califfb0465f1fa: Gained carrier Sep 5 00:06:34.663640 containerd[1469]: time="2025-09-05T00:06:34.663600003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56d8f46b5d-7zqzw,Uid:43623a80-72d4-46e0-adcc-392202d1d1f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd\"" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.417 [INFO][4043] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.434 [INFO][4043] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7988f88666--jxkr8-eth0 goldmane-7988f88666- calico-system b658ca0f-5303-457c-9cf1-6dddd6c1387f 945 0 2025-09-05 00:06:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7988f88666-jxkr8 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] califfb0465f1fa [] [] }} ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.434 [INFO][4043] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.483 [INFO][4114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" HandleID="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" 
Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.484 [INFO][4114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" HandleID="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000196770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7988f88666-jxkr8", "timestamp":"2025-09-05 00:06:34.483773067 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.484 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.536 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.536 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.600 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.614 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.621 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.622 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.624 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.624 [INFO][4114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.625 [INFO][4114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163 Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.629 [INFO][4114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.634 [INFO][4114] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.634 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" host="localhost" Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.634 [INFO][4114] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Sep 5 00:06:34.664530 containerd[1469]: 2025-09-05 00:06:34.634 [INFO][4114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" HandleID="k8s-pod-network.43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.665019 containerd[1469]: 2025-09-05 00:06:34.638 [INFO][4043] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--jxkr8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b658ca0f-5303-457c-9cf1-6dddd6c1387f", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7988f88666-jxkr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfb0465f1fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:34.665019 containerd[1469]: 2025-09-05 00:06:34.638 [INFO][4043] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.665019 containerd[1469]: 2025-09-05 00:06:34.639 [INFO][4043] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfb0465f1fa ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.665019 containerd[1469]: 2025-09-05 00:06:34.650 [INFO][4043] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.665019 containerd[1469]: 2025-09-05 00:06:34.651 [INFO][4043] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--jxkr8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b658ca0f-5303-457c-9cf1-6dddd6c1387f", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163", Pod:"goldmane-7988f88666-jxkr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfb0465f1fa", MAC:"86:81:03:5f:f2:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:34.665019 containerd[1469]: 2025-09-05 00:06:34.660 [INFO][4043] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163" Namespace="calico-system" Pod="goldmane-7988f88666-jxkr8" WorkloadEndpoint="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:34.682331 containerd[1469]: time="2025-09-05T00:06:34.682245916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:34.682331 containerd[1469]: time="2025-09-05T00:06:34.682299022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:34.682331 containerd[1469]: time="2025-09-05T00:06:34.682321546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:34.682529 containerd[1469]: time="2025-09-05T00:06:34.682409842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:34.706139 systemd[1]: Started cri-containerd-43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163.scope - libcontainer container 43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163. 
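[Editor's note] The `run-netns-cni\x2d…` and `var-lib-kubelet-pods-…\x7e…` mount units deactivated above illustrate systemd's unit-name escaping: a mount unit is named after its mount point, `/` becomes `-`, and any byte outside `[a-zA-Z0-9:_.]` is written as `\xXX` — so the literal `-` in a CNI netns ID becomes `\x2d`, and the `~` in `kubernetes.io~projected` becomes `\x7e`. A minimal Go sketch of that mapping (simplified; not the canonical systemd implementation, which also special-cases a leading `.`):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath mimics `systemd-escape --path`: strip the outer slashes,
// map the remaining '/' separators to '-', and hex-escape every byte
// that is not [a-zA-Z0-9:_.].
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the first mount-unit name deactivated in the log above.
	fmt.Println(escapePath("/run/netns/cni-617b2a31-e140-665f-e025-03201689ffeb") + ".mount")
	// run-netns-cni\x2d617b2a31\x2de140\x2d665f\x2de025\x2d03201689ffeb.mount
}
```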
Sep 5 00:06:34.719357 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:06:34.743173 containerd[1469]: time="2025-09-05T00:06:34.743117188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-jxkr8,Uid:b658ca0f-5303-457c-9cf1-6dddd6c1387f,Namespace:calico-system,Attempt:1,} returns sandbox id \"43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163\"" Sep 5 00:06:35.057934 containerd[1469]: time="2025-09-05T00:06:35.057887766Z" level=info msg="StopPodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\"" Sep 5 00:06:35.058276 containerd[1469]: time="2025-09-05T00:06:35.058241078Z" level=info msg="StopPodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\"" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.108 [INFO][4294] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.108 [INFO][4294] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" iface="eth0" netns="/var/run/netns/cni-f3270095-e977-e012-02c2-1b2150ad4e27" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.108 [INFO][4294] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" iface="eth0" netns="/var/run/netns/cni-f3270095-e977-e012-02c2-1b2150ad4e27" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.108 [INFO][4294] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" iface="eth0" netns="/var/run/netns/cni-f3270095-e977-e012-02c2-1b2150ad4e27" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.108 [INFO][4294] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.108 [INFO][4294] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.140 [INFO][4308] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.140 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.140 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.146 [WARNING][4308] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.146 [INFO][4308] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.147 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:35.152824 containerd[1469]: 2025-09-05 00:06:35.150 [INFO][4294] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:35.154075 containerd[1469]: time="2025-09-05T00:06:35.153177856Z" level=info msg="TearDown network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" successfully" Sep 5 00:06:35.154075 containerd[1469]: time="2025-09-05T00:06:35.153248806Z" level=info msg="StopPodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" returns successfully" Sep 5 00:06:35.154267 containerd[1469]: time="2025-09-05T00:06:35.154236550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7b569f5-wfvvx,Uid:ff309b76-ee57-400d-897b-26dbc2ef6eeb,Namespace:calico-system,Attempt:1,}" Sep 5 00:06:35.156400 systemd[1]: run-netns-cni\x2df3270095\x2de977\x2de012\x2d02c2\x2d1b2150ad4e27.mount: Deactivated successfully. Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.114 [INFO][4293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.114 [INFO][4293] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" iface="eth0" netns="/var/run/netns/cni-f4517934-2cf5-400b-8763-6e3d6f8b1365" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.116 [INFO][4293] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" iface="eth0" netns="/var/run/netns/cni-f4517934-2cf5-400b-8763-6e3d6f8b1365" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.116 [INFO][4293] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" iface="eth0" netns="/var/run/netns/cni-f4517934-2cf5-400b-8763-6e3d6f8b1365" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.116 [INFO][4293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.116 [INFO][4293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.141 [INFO][4316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.141 [INFO][4316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.147 [INFO][4316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.152 [WARNING][4316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.152 [INFO][4316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.156 [INFO][4316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:35.163868 containerd[1469]: 2025-09-05 00:06:35.160 [INFO][4293] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:35.164325 containerd[1469]: time="2025-09-05T00:06:35.164054446Z" level=info msg="TearDown network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" successfully" Sep 5 00:06:35.164325 containerd[1469]: time="2025-09-05T00:06:35.164082031Z" level=info msg="StopPodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" returns successfully" Sep 5 00:06:35.164495 kubelet[2553]: E0905 00:06:35.164456 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:35.165262 containerd[1469]: time="2025-09-05T00:06:35.165229542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8skvt,Uid:70048a66-95e5-4bf3-806b-b05cba03386e,Namespace:kube-system,Attempt:1,}" Sep 5 00:06:35.284164 systemd-networkd[1402]: cali7c502d625f3: Link UP Sep 5 00:06:35.287322 systemd-networkd[1402]: cali7c502d625f3: Gained carrier Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.194 [INFO][4325] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.206 [INFO][4325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0 calico-kube-controllers-59b7b569f5- calico-system ff309b76-ee57-400d-897b-26dbc2ef6eeb 974 0 2025-09-05 00:06:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59b7b569f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-59b7b569f5-wfvvx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7c502d625f3 [] [] }} ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.206 [INFO][4325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.239 [INFO][4351] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" HandleID="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.239 [INFO][4351] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" HandleID="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-59b7b569f5-wfvvx", "timestamp":"2025-09-05 00:06:35.239644095 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.240 [INFO][4351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.240 [INFO][4351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.240 [INFO][4351] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.246 [INFO][4351] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.251 [INFO][4351] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.254 [INFO][4351] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.256 [INFO][4351] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.259 [INFO][4351] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.259 [INFO][4351] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.261 [INFO][4351] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1 Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.265 [INFO][4351] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.271 [INFO][4351] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.272 [INFO][4351] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" host="localhost" Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.272 [INFO][4351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
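[Editor's note] Both sandbox ADDs so far follow the same IPAM shape: acquire the host-wide lock, confirm the host's affinity to block 192.168.88.128/26, claim the first free address in that block, and write the block back before releasing the lock (goldmane received 192.168.88.131; calico-kube-controllers receives .132 below). A minimal sketch of the "first free IP in an affine block" step, assuming a simple in-memory allocation set in place of Calico's datastore:

```go
package main

import (
	"fmt"
	"net/netip"
)

// firstFree walks an affine block in address order and returns the first
// IP not yet allocated. Calico also reserves certain addresses per block;
// skipping the block's base address here is a stand-in for that rule
// (an assumption of this sketch, not Calico's exact reservation logic).
func firstFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	ip := block.Addr().Next() // skip .128, the block's base address
	for block.Contains(ip) {
		if !allocated[ip] {
			return ip, true
		}
		ip = ip.Next()
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	allocated := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.129"): true, // assigned earlier, not shown in this section
		netip.MustParseAddr("192.168.88.130"): true, // calico-apiserver-56d8f46b5d-7zqzw above
	}
	if ip, ok := firstFree(block, allocated); ok {
		fmt.Println("assign", ip) // assign 192.168.88.131, as for goldmane above
	}
}
```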
Sep 5 00:06:35.303874 containerd[1469]: 2025-09-05 00:06:35.272 [INFO][4351] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" HandleID="k8s-pod-network.068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.304662 containerd[1469]: 2025-09-05 00:06:35.276 [INFO][4325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0", GenerateName:"calico-kube-controllers-59b7b569f5-", Namespace:"calico-system", SelfLink:"", UID:"ff309b76-ee57-400d-897b-26dbc2ef6eeb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b7b569f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-59b7b569f5-wfvvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c502d625f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:35.304662 containerd[1469]: 2025-09-05 00:06:35.276 [INFO][4325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.304662 containerd[1469]: 2025-09-05 00:06:35.276 [INFO][4325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c502d625f3 ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.304662 containerd[1469]: 2025-09-05 00:06:35.289 [INFO][4325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.304662 containerd[1469]: 2025-09-05 00:06:35.289 [INFO][4325] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0", GenerateName:"calico-kube-controllers-59b7b569f5-", Namespace:"calico-system", SelfLink:"", UID:"ff309b76-ee57-400d-897b-26dbc2ef6eeb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b7b569f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1", Pod:"calico-kube-controllers-59b7b569f5-wfvvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c502d625f3", MAC:"a2:ad:34:d1:1e:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:35.304662 containerd[1469]: 2025-09-05 00:06:35.297 [INFO][4325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1" Namespace="calico-system" Pod="calico-kube-controllers-59b7b569f5-wfvvx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:35.306779 systemd[1]: Removed slice kubepods-besteffort-pod5af96d62_36e9_4622_9b9f_c02e6f235a53.slice - libcontainer container kubepods-besteffort-pod5af96d62_36e9_4622_9b9f_c02e6f235a53.slice. 
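[Editor's note] The slice removed above encodes the pod UID directly in the cgroup unit name: with the systemd cgroup driver, kubelet combines the QoS class and the pod UID `5af96d62-36e9-4622-9b9f-c02e6f235a53`, rewriting the UID's dashes to underscores because `-` is the hierarchy separator in slice names. A sketch of that mapping, assuming the besteffort naming seen in this log (Guaranteed pods nest directly under kubepods.slice and omit the QoS segment):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName derives the systemd slice unit for a pod, mirroring the
// kubepods-<qos>-pod<uid>.slice names in the log. Dashes in the UID are
// rewritten to underscores so they are not parsed as slice levels.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSliceName("besteffort", "5af96d62-36e9-4622-9b9f-c02e6f235a53"))
	// kubepods-besteffort-pod5af96d62_36e9_4622_9b9f_c02e6f235a53.slice
}
```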
Sep 5 00:06:35.360097 kubelet[2553]: I0905 00:06:35.358796 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/110ee3f8-ed5b-42b3-b27e-221c8101e95f-whisker-backend-key-pair\") pod \"whisker-665795d94b-nh8zn\" (UID: \"110ee3f8-ed5b-42b3-b27e-221c8101e95f\") " pod="calico-system/whisker-665795d94b-nh8zn" Sep 5 00:06:35.360097 kubelet[2553]: I0905 00:06:35.358829 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/110ee3f8-ed5b-42b3-b27e-221c8101e95f-whisker-ca-bundle\") pod \"whisker-665795d94b-nh8zn\" (UID: \"110ee3f8-ed5b-42b3-b27e-221c8101e95f\") " pod="calico-system/whisker-665795d94b-nh8zn" Sep 5 00:06:35.360097 kubelet[2553]: I0905 00:06:35.358869 2553 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfnbc\" (UniqueName: \"kubernetes.io/projected/110ee3f8-ed5b-42b3-b27e-221c8101e95f-kube-api-access-bfnbc\") pod \"whisker-665795d94b-nh8zn\" (UID: \"110ee3f8-ed5b-42b3-b27e-221c8101e95f\") " pod="calico-system/whisker-665795d94b-nh8zn" Sep 5 00:06:35.369078 containerd[1469]: time="2025-09-05T00:06:35.368718351Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:35.369078 containerd[1469]: time="2025-09-05T00:06:35.368812609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:35.369078 containerd[1469]: time="2025-09-05T00:06:35.368823520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:35.370023 containerd[1469]: time="2025-09-05T00:06:35.369076924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:35.376737 systemd[1]: Created slice kubepods-besteffort-pod110ee3f8_ed5b_42b3_b27e_221c8101e95f.slice - libcontainer container kubepods-besteffort-pod110ee3f8_ed5b_42b3_b27e_221c8101e95f.slice. Sep 5 00:06:35.405996 systemd[1]: Started cri-containerd-068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1.scope - libcontainer container 068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1. 
Sep 5 00:06:35.430803 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:06:35.437960 systemd-networkd[1402]: cali5ec3839a929: Link UP Sep 5 00:06:35.438207 systemd-networkd[1402]: cali5ec3839a929: Gained carrier Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.228 [INFO][4344] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.244 [INFO][4344] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0 coredns-7c65d6cfc9- kube-system 70048a66-95e5-4bf3-806b-b05cba03386e 976 0 2025-09-05 00:05:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8skvt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ec3839a929 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.244 [INFO][4344] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.276 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" HandleID="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.277 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" HandleID="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7270), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8skvt", "timestamp":"2025-09-05 00:06:35.276898409 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.277 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.277 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.277 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.350 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.367 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.395 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.402 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.408 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.408 [INFO][4362] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.409 [INFO][4362] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2 Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.422 [INFO][4362] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.428 [INFO][4362] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.428 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" host="localhost" Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.428 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:06:35.454222 containerd[1469]: 2025-09-05 00:06:35.428 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" HandleID="k8s-pod-network.1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.454879 containerd[1469]: 2025-09-05 00:06:35.432 [INFO][4344] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"70048a66-95e5-4bf3-806b-b05cba03386e", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8skvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec3839a929", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:35.454879 containerd[1469]: 2025-09-05 00:06:35.433 [INFO][4344] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.454879 containerd[1469]: 2025-09-05 00:06:35.433 [INFO][4344] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec3839a929 ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.454879 containerd[1469]: 2025-09-05 00:06:35.435 [INFO][4344] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.454879 
containerd[1469]: 2025-09-05 00:06:35.437 [INFO][4344] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"70048a66-95e5-4bf3-806b-b05cba03386e", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2", Pod:"coredns-7c65d6cfc9-8skvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec3839a929", MAC:"72:76:19:43:03:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:35.454879 containerd[1469]: 2025-09-05 00:06:35.448 [INFO][4344] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8skvt" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:35.501491 containerd[1469]: time="2025-09-05T00:06:35.501409407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59b7b569f5-wfvvx,Uid:ff309b76-ee57-400d-897b-26dbc2ef6eeb,Namespace:calico-system,Attempt:1,} returns sandbox id \"068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1\"" Sep 5 00:06:35.610896 systemd-networkd[1402]: calieddfabaf632: Gained IPv6LL Sep 5 00:06:35.620960 systemd[1]: run-netns-cni\x2df4517934\x2d2cf5\x2d400b\x2d8763\x2d6e3d6f8b1365.mount: Deactivated successfully. Sep 5 00:06:35.628236 containerd[1469]: time="2025-09-05T00:06:35.628130947Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:35.628236 containerd[1469]: time="2025-09-05T00:06:35.628203873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:35.628848 containerd[1469]: time="2025-09-05T00:06:35.628215295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:35.628848 containerd[1469]: time="2025-09-05T00:06:35.628330925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:35.667159 systemd[1]: Started cri-containerd-1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2.scope - libcontainer container 1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2. Sep 5 00:06:35.684030 kernel: bpftool[4610]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 5 00:06:35.684735 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:06:35.689191 containerd[1469]: time="2025-09-05T00:06:35.689122769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-665795d94b-nh8zn,Uid:110ee3f8-ed5b-42b3-b27e-221c8101e95f,Namespace:calico-system,Attempt:0,}" Sep 5 00:06:35.723244 containerd[1469]: time="2025-09-05T00:06:35.723193816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8skvt,Uid:70048a66-95e5-4bf3-806b-b05cba03386e,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2\"" Sep 5 00:06:35.724005 kubelet[2553]: E0905 00:06:35.723961 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:06:35.727502 containerd[1469]: time="2025-09-05T00:06:35.727449916Z" level=info msg="CreateContainer within sandbox \"1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 00:06:35.759940 containerd[1469]: time="2025-09-05T00:06:35.759876715Z" level=info msg="CreateContainer within sandbox \"1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8ca642be57ee3e646839772adee04c6f857c00af4649ef1a524d64ef29311891\"" Sep 5 00:06:35.762075 containerd[1469]: time="2025-09-05T00:06:35.761743706Z" level=info msg="StartContainer for \"8ca642be57ee3e646839772adee04c6f857c00af4649ef1a524d64ef29311891\"" Sep 5 00:06:35.795129 systemd[1]: Started cri-containerd-8ca642be57ee3e646839772adee04c6f857c00af4649ef1a524d64ef29311891.scope - libcontainer container 8ca642be57ee3e646839772adee04c6f857c00af4649ef1a524d64ef29311891. 
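[Editor's note] The coredns WorkloadEndpoint above is the only one in this section with a populated Ports list, and the Go struct dump renders the numeric fields in hex: Port 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the coredns Prometheus metrics port); HostPort 0x0 means no host port mapping. Decoded:

```go
package main

import "fmt"

func main() {
	// Ports from the coredns WorkloadEndpoint dump above, converted from hex.
	fmt.Println(0x35, 0x23c1) // 53 9153 — DNS and coredns metrics
}
```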
Sep 5 00:06:35.855146 containerd[1469]: time="2025-09-05T00:06:35.855084671Z" level=info msg="StartContainer for \"8ca642be57ee3e646839772adee04c6f857c00af4649ef1a524d64ef29311891\" returns successfully" Sep 5 00:06:35.856091 systemd-networkd[1402]: cali66b9aef321b: Link UP Sep 5 00:06:35.856896 systemd-networkd[1402]: cali66b9aef321b: Gained carrier Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.760 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--665795d94b--nh8zn-eth0 whisker-665795d94b- calico-system 110ee3f8-ed5b-42b3-b27e-221c8101e95f 994 0 2025-09-05 00:06:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:665795d94b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-665795d94b-nh8zn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali66b9aef321b [] [] }} ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.761 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.808 [INFO][4633] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" HandleID="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Workload="localhost-k8s-whisker--665795d94b--nh8zn-eth0" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.809 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" HandleID="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Workload="localhost-k8s-whisker--665795d94b--nh8zn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e30), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-665795d94b-nh8zn", "timestamp":"2025-09-05 00:06:35.806276338 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.809 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.809 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.809 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.815 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.820 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.825 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.827 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.830 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.831 [INFO][4633] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.834 [INFO][4633] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7 Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.838 [INFO][4633] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.844 [INFO][4633] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.844 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" host="localhost" Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.844 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
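[Editor's note] The recurring kubelet warning "Nameserver limits exceeded" (dns.go:153, seen twice in this section) fires because the host resolv.conf lists more nameservers than the platform resolver supports, so kubelet trims the list when building pod DNS config; here it keeps 1.1.1.1, 1.0.0.1, and 8.8.8.8 and drops the rest. A sketch of that trimming, assuming the conventional glibc limit of three nameservers:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc's MAXNS; assumed here — kubelet's limit is platform-dependent

// trimNameservers keeps the first maxNameservers "nameserver" entries,
// matching the applied list reported in the kubelet warning above.
func trimNameservers(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(trimNameservers(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```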
Sep 5 00:06:35.877092 containerd[1469]: 2025-09-05 00:06:35.844 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" HandleID="k8s-pod-network.3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Workload="localhost-k8s-whisker--665795d94b--nh8zn-eth0"
Sep 5 00:06:35.877792 containerd[1469]: 2025-09-05 00:06:35.850 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--665795d94b--nh8zn-eth0", GenerateName:"whisker-665795d94b-", Namespace:"calico-system", SelfLink:"", UID:"110ee3f8-ed5b-42b3-b27e-221c8101e95f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"665795d94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-665795d94b-nh8zn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66b9aef321b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:06:35.877792 containerd[1469]: 2025-09-05 00:06:35.850 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0"
Sep 5 00:06:35.877792 containerd[1469]: 2025-09-05 00:06:35.850 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66b9aef321b ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0"
Sep 5 00:06:35.877792 containerd[1469]: 2025-09-05 00:06:35.857 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0"
Sep 5 00:06:35.877792 containerd[1469]: 2025-09-05 00:06:35.857 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--665795d94b--nh8zn-eth0", GenerateName:"whisker-665795d94b-", Namespace:"calico-system", SelfLink:"", UID:"110ee3f8-ed5b-42b3-b27e-221c8101e95f", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"665795d94b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7", Pod:"whisker-665795d94b-nh8zn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66b9aef321b", MAC:"42:d4:cf:f4:d8:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:06:35.877792 containerd[1469]: 2025-09-05 00:06:35.870 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7" Namespace="calico-system" Pod="whisker-665795d94b-nh8zn" WorkloadEndpoint="localhost-k8s-whisker--665795d94b--nh8zn-eth0"
Sep 5 00:06:35.912542 containerd[1469]: time="2025-09-05T00:06:35.911533502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:06:35.912542 containerd[1469]: time="2025-09-05T00:06:35.911630063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:06:35.912542 containerd[1469]: time="2025-09-05T00:06:35.911646155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:35.912542 containerd[1469]: time="2025-09-05T00:06:35.912205527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:35.937327 systemd[1]: Started cri-containerd-3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7.scope - libcontainer container 3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7.
Sep 5 00:06:35.963955 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:06:35.997397 containerd[1469]: time="2025-09-05T00:06:35.997182075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-665795d94b-nh8zn,Uid:110ee3f8-ed5b-42b3-b27e-221c8101e95f,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7\""
Sep 5 00:06:36.027880 systemd-networkd[1402]: vxlan.calico: Link UP
Sep 5 00:06:36.027891 systemd-networkd[1402]: vxlan.calico: Gained carrier
Sep 5 00:06:36.062364 containerd[1469]: time="2025-09-05T00:06:36.061798143Z" level=info msg="StopPodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\""
Sep 5 00:06:36.065594 containerd[1469]: time="2025-09-05T00:06:36.065037359Z" level=info msg="StopPodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\""
Sep 5 00:06:36.074622 kubelet[2553]: I0905 00:06:36.074585 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5af96d62-36e9-4622-9b9f-c02e6f235a53" path="/var/lib/kubelet/pods/5af96d62-36e9-4622-9b9f-c02e6f235a53/volumes"
Sep 5 00:06:36.184327 systemd-networkd[1402]: califfb0465f1fa: Gained IPv6LL
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.146 [INFO][4783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.146 [INFO][4783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" iface="eth0" netns="/var/run/netns/cni-d99abdcb-285e-e5c0-a0fd-79c141352dfa"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.147 [INFO][4783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" iface="eth0" netns="/var/run/netns/cni-d99abdcb-285e-e5c0-a0fd-79c141352dfa"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.147 [INFO][4783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" iface="eth0" netns="/var/run/netns/cni-d99abdcb-285e-e5c0-a0fd-79c141352dfa"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.147 [INFO][4783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.147 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.185 [INFO][4808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.186 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.186 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.194 [WARNING][4808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.194 [INFO][4808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.196 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:06:36.207732 containerd[1469]: 2025-09-05 00:06:36.203 [INFO][4783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145"
Sep 5 00:06:36.208903 containerd[1469]: time="2025-09-05T00:06:36.208189815Z" level=info msg="TearDown network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" successfully"
Sep 5 00:06:36.208903 containerd[1469]: time="2025-09-05T00:06:36.208252469Z" level=info msg="StopPodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" returns successfully"
Sep 5 00:06:36.212674 containerd[1469]: time="2025-09-05T00:06:36.211375485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-lvxcz,Uid:eea7809f-54a5-40ed-94cd-7efa0cc56047,Namespace:calico-apiserver,Attempt:1,}"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.141 [INFO][4782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.142 [INFO][4782] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" iface="eth0" netns="/var/run/netns/cni-3a927b13-3756-dd9b-da85-65ad33df64ee"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.143 [INFO][4782] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" iface="eth0" netns="/var/run/netns/cni-3a927b13-3756-dd9b-da85-65ad33df64ee"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.143 [INFO][4782] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" iface="eth0" netns="/var/run/netns/cni-3a927b13-3756-dd9b-da85-65ad33df64ee"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.143 [INFO][4782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.143 [INFO][4782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.190 [INFO][4801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.190 [INFO][4801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.196 [INFO][4801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.205 [WARNING][4801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.205 [INFO][4801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.208 [INFO][4801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:06:36.221251 containerd[1469]: 2025-09-05 00:06:36.215 [INFO][4782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da"
Sep 5 00:06:36.222079 containerd[1469]: time="2025-09-05T00:06:36.222035344Z" level=info msg="TearDown network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" successfully"
Sep 5 00:06:36.222120 containerd[1469]: time="2025-09-05T00:06:36.222077869Z" level=info msg="StopPodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" returns successfully"
Sep 5 00:06:36.223425 kubelet[2553]: E0905 00:06:36.223199 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:36.226475 containerd[1469]: time="2025-09-05T00:06:36.226059450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6gz8l,Uid:fb5a9823-833b-45bb-9546-46d6c264eca2,Namespace:kube-system,Attempt:1,}"
Sep 5 00:06:36.293027 kubelet[2553]: E0905 00:06:36.292948 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:36.313260 kubelet[2553]: I0905 00:06:36.313185 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8skvt" podStartSLOduration=42.313164817 podStartE2EDuration="42.313164817s" podCreationTimestamp="2025-09-05 00:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:36.311893481 +0000 UTC m=+48.388855263" watchObservedRunningTime="2025-09-05 00:06:36.313164817 +0000 UTC m=+48.390126599"
Sep 5 00:06:36.395304 systemd-networkd[1402]: cali742170eaba6: Link UP
Sep 5 00:06:36.396006 systemd-networkd[1402]: cali742170eaba6: Gained carrier
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.284 [INFO][4817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0 calico-apiserver-579d867b4c- calico-apiserver eea7809f-54a5-40ed-94cd-7efa0cc56047 1017 0 2025-09-05 00:06:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579d867b4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-579d867b4c-lvxcz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali742170eaba6 [] [] }} ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.285 [INFO][4817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.329 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" HandleID="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.330 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" HandleID="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d7770), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-579d867b4c-lvxcz", "timestamp":"2025-09-05 00:06:36.32994617 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.330 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.330 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.330 [INFO][4844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.344 [INFO][4844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.351 [INFO][4844] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.356 [INFO][4844] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.364 [INFO][4844] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.371 [INFO][4844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.371 [INFO][4844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.374 [INFO][4844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.379 [INFO][4844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.387 [INFO][4844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.387 [INFO][4844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" host="localhost"
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.387 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:06:36.419654 containerd[1469]: 2025-09-05 00:06:36.387 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" HandleID="k8s-pod-network.09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.420278 containerd[1469]: 2025-09-05 00:06:36.392 [INFO][4817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eea7809f-54a5-40ed-94cd-7efa0cc56047", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-579d867b4c-lvxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali742170eaba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:06:36.420278 containerd[1469]: 2025-09-05 00:06:36.392 [INFO][4817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.420278 containerd[1469]: 2025-09-05 00:06:36.392 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali742170eaba6 ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.420278 containerd[1469]: 2025-09-05 00:06:36.396 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.420278 containerd[1469]: 2025-09-05 00:06:36.397 [INFO][4817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eea7809f-54a5-40ed-94cd-7efa0cc56047", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1", Pod:"calico-apiserver-579d867b4c-lvxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali742170eaba6", MAC:"f2:72:e0:29:8b:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:06:36.420278 containerd[1469]: 2025-09-05 00:06:36.410 [INFO][4817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1" Namespace="calico-apiserver" Pod="calico-apiserver-579d867b4c-lvxcz" WorkloadEndpoint="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0"
Sep 5 00:06:36.448782 containerd[1469]: time="2025-09-05T00:06:36.447596056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:06:36.448782 containerd[1469]: time="2025-09-05T00:06:36.448651733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:06:36.448782 containerd[1469]: time="2025-09-05T00:06:36.448669117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:36.448972 containerd[1469]: time="2025-09-05T00:06:36.448797983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:36.489263 systemd[1]: Started cri-containerd-09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1.scope - libcontainer container 09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1.
Sep 5 00:06:36.506084 systemd-networkd[1402]: calia27ebc39e5f: Gained IPv6LL
Sep 5 00:06:36.506577 systemd-networkd[1402]: cali75696995526: Link UP
Sep 5 00:06:36.507288 systemd-networkd[1402]: cali75696995526: Gained carrier
Sep 5 00:06:36.516305 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.308 [INFO][4829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0 coredns-7c65d6cfc9- kube-system fb5a9823-833b-45bb-9546-46d6c264eca2 1016 0 2025-09-05 00:05:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-6gz8l eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali75696995526 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.308 [INFO][4829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.352 [INFO][4853] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" HandleID="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.353 [INFO][4853] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" HandleID="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-6gz8l", "timestamp":"2025-09-05 00:06:36.352875498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.353 [INFO][4853] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.387 [INFO][4853] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.387 [INFO][4853] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.445 [INFO][4853] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.452 [INFO][4853] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.458 [INFO][4853] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.468 [INFO][4853] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.473 [INFO][4853] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.473 [INFO][4853] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.478 [INFO][4853] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.488 [INFO][4853] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.497 [INFO][4853] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.498 [INFO][4853] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" host="localhost"
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.498 [INFO][4853] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:06:36.536323 containerd[1469]: 2025-09-05 00:06:36.498 [INFO][4853] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" HandleID="k8s-pod-network.351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.536913 containerd[1469]: 2025-09-05 00:06:36.502 [INFO][4829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fb5a9823-833b-45bb-9546-46d6c264eca2", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-6gz8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75696995526", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:06:36.536913 containerd[1469]: 2025-09-05 00:06:36.502 [INFO][4829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.536913 containerd[1469]: 2025-09-05 00:06:36.502 [INFO][4829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75696995526 ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.536913 containerd[1469]: 2025-09-05 00:06:36.508 [INFO][4829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.536913 containerd[1469]: 2025-09-05 00:06:36.509 [INFO][4829] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fb5a9823-833b-45bb-9546-46d6c264eca2", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18", Pod:"coredns-7c65d6cfc9-6gz8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75696995526", MAC:"16:04:f8:23:24:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 00:06:36.536913 containerd[1469]: 2025-09-05 00:06:36.525 [INFO][4829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6gz8l" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0"
Sep 5 00:06:36.561549 containerd[1469]: time="2025-09-05T00:06:36.561394749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579d867b4c-lvxcz,Uid:eea7809f-54a5-40ed-94cd-7efa0cc56047,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1\""
Sep 5 00:06:36.572123 containerd[1469]: time="2025-09-05T00:06:36.571554836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 5 00:06:36.572123 containerd[1469]: time="2025-09-05T00:06:36.571667950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 5 00:06:36.572123 containerd[1469]: time="2025-09-05T00:06:36.571684994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:36.572123 containerd[1469]: time="2025-09-05T00:06:36.571935572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 5 00:06:36.603131 systemd[1]: Started cri-containerd-351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18.scope - libcontainer container 351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18.
Sep 5 00:06:36.621304 systemd[1]: run-netns-cni\x2dd99abdcb\x2d285e\x2de5c0\x2da0fd\x2d79c141352dfa.mount: Deactivated successfully.
Sep 5 00:06:36.621770 systemd[1]: run-netns-cni\x2d3a927b13\x2d3756\x2ddd9b\x2dda85\x2d65ad33df64ee.mount: Deactivated successfully.
Sep 5 00:06:36.630721 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 00:06:36.662874 containerd[1469]: time="2025-09-05T00:06:36.662826011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6gz8l,Uid:fb5a9823-833b-45bb-9546-46d6c264eca2,Namespace:kube-system,Attempt:1,} returns sandbox id \"351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18\""
Sep 5 00:06:36.663635 kubelet[2553]: E0905 00:06:36.663611 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:36.666357 containerd[1469]: time="2025-09-05T00:06:36.666302007Z" level=info msg="CreateContainer within sandbox \"351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 00:06:36.720520 containerd[1469]: time="2025-09-05T00:06:36.720472106Z" level=info msg="CreateContainer within sandbox \"351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9760ae8e278f36ead5b9dafaabfa02fb1096a9f3444d74eaa25e7a06fbf7f808\""
Sep 5 00:06:36.721490 containerd[1469]: time="2025-09-05T00:06:36.721462884Z" level=info msg="StartContainer for \"9760ae8e278f36ead5b9dafaabfa02fb1096a9f3444d74eaa25e7a06fbf7f808\""
Sep 5 00:06:36.760274 systemd[1]: Started cri-containerd-9760ae8e278f36ead5b9dafaabfa02fb1096a9f3444d74eaa25e7a06fbf7f808.scope - libcontainer container 9760ae8e278f36ead5b9dafaabfa02fb1096a9f3444d74eaa25e7a06fbf7f808.
Sep 5 00:06:36.760704 systemd-networkd[1402]: cali5ec3839a929: Gained IPv6LL
Sep 5 00:06:36.854295 containerd[1469]: time="2025-09-05T00:06:36.854201990Z" level=info msg="StartContainer for \"9760ae8e278f36ead5b9dafaabfa02fb1096a9f3444d74eaa25e7a06fbf7f808\" returns successfully"
Sep 5 00:06:37.080155 systemd-networkd[1402]: cali66b9aef321b: Gained IPv6LL
Sep 5 00:06:37.208259 systemd-networkd[1402]: cali7c502d625f3: Gained IPv6LL
Sep 5 00:06:37.309302 kubelet[2553]: E0905 00:06:37.308945 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:37.310171 kubelet[2553]: E0905 00:06:37.309047 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:37.335793 kubelet[2553]: I0905 00:06:37.335606 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6gz8l" podStartSLOduration=43.335569251 podStartE2EDuration="43.335569251s" podCreationTimestamp="2025-09-05 00:05:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 00:06:37.329304529 +0000 UTC m=+49.406266311" watchObservedRunningTime="2025-09-05 00:06:37.335569251 +0000 UTC m=+49.412531033"
Sep 5 00:06:37.400225 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL
Sep 5 00:06:37.464753 systemd-networkd[1402]: cali742170eaba6: Gained IPv6LL
Sep 5 00:06:37.784186 systemd-networkd[1402]: cali75696995526: Gained IPv6LL
Sep 5 00:06:37.873903 containerd[1469]: time="2025-09-05T00:06:37.873822916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:37.880258 containerd[1469]: time="2025-09-05T00:06:37.880196854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864"
Sep 5 00:06:37.884692 containerd[1469]: time="2025-09-05T00:06:37.884636492Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:37.888877 containerd[1469]: time="2025-09-05T00:06:37.888843166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:37.889511 containerd[1469]: time="2025-09-05T00:06:37.889476724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.323932959s"
Sep 5 00:06:37.889577 containerd[1469]: time="2025-09-05T00:06:37.889511894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:06:37.890461 containerd[1469]: time="2025-09-05T00:06:37.890440929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 5 00:06:37.891605 containerd[1469]: time="2025-09-05T00:06:37.891574310Z" level=info msg="CreateContainer within sandbox \"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:06:37.973889 containerd[1469]: time="2025-09-05T00:06:37.973828907Z" level=info msg="CreateContainer within sandbox \"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\""
Sep 5 00:06:37.974430 containerd[1469]: time="2025-09-05T00:06:37.974400282Z" level=info msg="StartContainer for \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\""
Sep 5 00:06:38.010198 systemd[1]: Started cri-containerd-48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f.scope - libcontainer container 48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f.
Sep 5 00:06:38.081630 containerd[1469]: time="2025-09-05T00:06:38.081474393Z" level=info msg="StartContainer for \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\" returns successfully"
Sep 5 00:06:38.312357 kubelet[2553]: E0905 00:06:38.312303 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:38.312822 kubelet[2553]: E0905 00:06:38.312749 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 5 00:06:38.404961 containerd[1469]: time="2025-09-05T00:06:38.403278744Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:38.414178 containerd[1469]: time="2025-09-05T00:06:38.414099032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 5 00:06:38.417184 containerd[1469]: time="2025-09-05T00:06:38.417131110Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 526.593069ms"
Sep 5 00:06:38.417184 containerd[1469]: time="2025-09-05T00:06:38.417181739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 5 00:06:38.420734 containerd[1469]: time="2025-09-05T00:06:38.419034627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 5 00:06:38.421912 containerd[1469]: time="2025-09-05T00:06:38.421827584Z" level=info msg="CreateContainer within sandbox \"c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 00:06:38.657897 containerd[1469]: time="2025-09-05T00:06:38.657733824Z" level=info msg="CreateContainer within sandbox \"c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5cb5feeeb4657cd9c67592369a9ff1cb448abd13398db57aa7ccc5e31688beee\""
Sep 5 00:06:38.658660 containerd[1469]: time="2025-09-05T00:06:38.658617232Z" level=info msg="StartContainer for \"5cb5feeeb4657cd9c67592369a9ff1cb448abd13398db57aa7ccc5e31688beee\""
Sep 5 00:06:38.697224 systemd[1]: Started cri-containerd-5cb5feeeb4657cd9c67592369a9ff1cb448abd13398db57aa7ccc5e31688beee.scope - libcontainer container 5cb5feeeb4657cd9c67592369a9ff1cb448abd13398db57aa7ccc5e31688beee.
Sep 5 00:06:38.864103 containerd[1469]: time="2025-09-05T00:06:38.864038287Z" level=info msg="StartContainer for \"5cb5feeeb4657cd9c67592369a9ff1cb448abd13398db57aa7ccc5e31688beee\" returns successfully"
Sep 5 00:06:39.316024 kubelet[2553]: I0905 00:06:39.315960 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:06:39.389592 kubelet[2553]: I0905 00:06:39.389522 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56d8f46b5d-7zqzw" podStartSLOduration=29.636026922 podStartE2EDuration="33.389496395s" podCreationTimestamp="2025-09-05 00:06:06 +0000 UTC" firstStartedPulling="2025-09-05 00:06:34.664807012 +0000 UTC m=+46.741768794" lastFinishedPulling="2025-09-05 00:06:38.418276485 +0000 UTC m=+50.495238267" observedRunningTime="2025-09-05 00:06:39.3894006 +0000 UTC m=+51.466362382" watchObservedRunningTime="2025-09-05 00:06:39.389496395 +0000 UTC m=+51.466458177"
Sep 5 00:06:39.389967 kubelet[2553]: I0905 00:06:39.389903 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-579d867b4c-ljpgm" podStartSLOduration=31.064845282 podStartE2EDuration="34.389898397s" podCreationTimestamp="2025-09-05 00:06:05 +0000 UTC" firstStartedPulling="2025-09-05 00:06:34.565278117 +0000 UTC m=+46.642239899" lastFinishedPulling="2025-09-05 00:06:37.890331222 +0000 UTC m=+49.967293014" observedRunningTime="2025-09-05 00:06:38.357690602 +0000 UTC m=+50.434652374" watchObservedRunningTime="2025-09-05 00:06:39.389898397 +0000 UTC m=+51.466860169"
Sep 5 00:06:39.395736 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:43722.service - OpenSSH per-connection server daemon (10.0.0.1:43722).
Sep 5 00:06:39.460641 sshd[5147]: Accepted publickey for core from 10.0.0.1 port 43722 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:39.462777 sshd[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:39.470408 systemd-logind[1453]: New session 11 of user core.
Sep 5 00:06:39.476282 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 5 00:06:39.635129 sshd[5147]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:39.640450 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:43722.service: Deactivated successfully.
Sep 5 00:06:39.644518 systemd[1]: session-11.scope: Deactivated successfully.
Sep 5 00:06:39.647347 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit.
Sep 5 00:06:39.648296 systemd-logind[1453]: Removed session 11.
Sep 5 00:06:40.318023 kubelet[2553]: I0905 00:06:40.317968 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 00:06:41.481264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301476517.mount: Deactivated successfully.
Sep 5 00:06:42.230915 containerd[1469]: time="2025-09-05T00:06:42.230841416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:42.232394 containerd[1469]: time="2025-09-05T00:06:42.232344743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 5 00:06:42.234279 containerd[1469]: time="2025-09-05T00:06:42.234233535Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:42.237884 containerd[1469]: time="2025-09-05T00:06:42.237808089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:42.239049 containerd[1469]: time="2025-09-05T00:06:42.238997259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 3.81984366s"
Sep 5 00:06:42.239049 containerd[1469]: time="2025-09-05T00:06:42.239040588Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 5 00:06:42.240477 containerd[1469]: time="2025-09-05T00:06:42.240244436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 5 00:06:42.241426 containerd[1469]: time="2025-09-05T00:06:42.241395015Z" level=info msg="CreateContainer within sandbox \"43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 5 00:06:42.257263 containerd[1469]: time="2025-09-05T00:06:42.257210912Z" level=info msg="CreateContainer within sandbox \"43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bbbbf5e58e859ae2770704b17bf1f5dee1f668edd4547181942b130667f1ca51\""
Sep 5 00:06:42.258719 containerd[1469]: time="2025-09-05T00:06:42.258574211Z" level=info msg="StartContainer for \"bbbbf5e58e859ae2770704b17bf1f5dee1f668edd4547181942b130667f1ca51\""
Sep 5 00:06:42.300235 systemd[1]: Started cri-containerd-bbbbf5e58e859ae2770704b17bf1f5dee1f668edd4547181942b130667f1ca51.scope - libcontainer container bbbbf5e58e859ae2770704b17bf1f5dee1f668edd4547181942b130667f1ca51.
Sep 5 00:06:42.344150 containerd[1469]: time="2025-09-05T00:06:42.344089383Z" level=info msg="StartContainer for \"bbbbf5e58e859ae2770704b17bf1f5dee1f668edd4547181942b130667f1ca51\" returns successfully"
Sep 5 00:06:43.357610 systemd[1]: run-containerd-runc-k8s.io-bbbbf5e58e859ae2770704b17bf1f5dee1f668edd4547181942b130667f1ca51-runc.TpdmJQ.mount: Deactivated successfully.
Sep 5 00:06:44.652676 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:57498.service - OpenSSH per-connection server daemon (10.0.0.1:57498).
Sep 5 00:06:45.455016 sshd[5268]: Accepted publickey for core from 10.0.0.1 port 57498 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM
Sep 5 00:06:45.457926 sshd[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 00:06:45.463814 systemd-logind[1453]: New session 12 of user core.
Sep 5 00:06:45.473267 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 5 00:06:46.058140 containerd[1469]: time="2025-09-05T00:06:46.058089882Z" level=info msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\""
Sep 5 00:06:46.152366 sshd[5268]: pam_unix(sshd:session): session closed for user core
Sep 5 00:06:46.157372 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:57498.service: Deactivated successfully.
Sep 5 00:06:46.161530 systemd[1]: session-12.scope: Deactivated successfully.
Sep 5 00:06:46.162627 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit.
Sep 5 00:06:46.164300 systemd-logind[1453]: Removed session 12.
Sep 5 00:06:46.719727 containerd[1469]: time="2025-09-05T00:06:46.719598452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:46.720658 containerd[1469]: time="2025-09-05T00:06:46.720588110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 5 00:06:46.721957 containerd[1469]: time="2025-09-05T00:06:46.721911646Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:46.736950 containerd[1469]: time="2025-09-05T00:06:46.736869919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 00:06:46.737697 containerd[1469]: time="2025-09-05T00:06:46.737665750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.497384608s"
Sep 5 00:06:46.737767 containerd[1469]: time="2025-09-05T00:06:46.737698861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 5 00:06:46.739594 containerd[1469]: time="2025-09-05T00:06:46.739244578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 5 00:06:46.757561 containerd[1469]: time="2025-09-05T00:06:46.757498972Z" level=info msg="CreateContainer within sandbox \"068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 5 00:06:46.758218 kubelet[2553]: I0905 00:06:46.757459 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-jxkr8" podStartSLOduration=32.261663902 podStartE2EDuration="39.757387165s" podCreationTimestamp="2025-09-05 00:06:07 +0000 UTC" firstStartedPulling="2025-09-05 00:06:34.744280522 +0000 UTC m=+46.821242314" lastFinishedPulling="2025-09-05 00:06:42.240003795 +0000 UTC m=+54.316965577" observedRunningTime="2025-09-05 00:06:43.365952474 +0000 UTC m=+55.442914246" watchObservedRunningTime="2025-09-05 00:06:46.757387165 +0000 UTC m=+58.834348947"
Sep 5 00:06:46.785625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1020319720.mount: Deactivated successfully.
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.759 [INFO][5296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.763 [INFO][5296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" iface="eth0" netns="/var/run/netns/cni-98216f95-42ac-82e5-d0b6-b2080f6328d9"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.763 [INFO][5296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" iface="eth0" netns="/var/run/netns/cni-98216f95-42ac-82e5-d0b6-b2080f6328d9"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.763 [INFO][5296] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" iface="eth0" netns="/var/run/netns/cni-98216f95-42ac-82e5-d0b6-b2080f6328d9"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.763 [INFO][5296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.763 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.794 [INFO][5315] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.795 [INFO][5315] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.795 [INFO][5315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.804 [WARNING][5315] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.804 [INFO][5315] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0"
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.807 [INFO][5315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 00:06:46.816247 containerd[1469]: 2025-09-05 00:06:46.811 [INFO][5296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901"
Sep 5 00:06:46.816716 containerd[1469]: time="2025-09-05T00:06:46.816655162Z" level=info msg="TearDown network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" successfully"
Sep 5 00:06:46.816716 containerd[1469]: time="2025-09-05T00:06:46.816692761Z" level=info msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" returns successfully"
Sep 5 00:06:46.817840 containerd[1469]: time="2025-09-05T00:06:46.817653096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9hbl,Uid:977e9373-23c2-4b46-9d36-9bf58abbfad5,Namespace:calico-system,Attempt:1,}"
Sep 5 00:06:46.820150 systemd[1]: run-netns-cni\x2d98216f95\x2d42ac\x2d82e5\x2dd0b6\x2db2080f6328d9.mount: Deactivated successfully.
Sep 5 00:06:46.968804 containerd[1469]: time="2025-09-05T00:06:46.968729563Z" level=info msg="CreateContainer within sandbox \"068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9af60a53cae50aa68563607ec7dd903d932aba29cb5ecbc06bab8ee760542976\""
Sep 5 00:06:46.969477 containerd[1469]: time="2025-09-05T00:06:46.969384493Z" level=info msg="StartContainer for \"9af60a53cae50aa68563607ec7dd903d932aba29cb5ecbc06bab8ee760542976\""
Sep 5 00:06:47.087155 systemd[1]: Started cri-containerd-9af60a53cae50aa68563607ec7dd903d932aba29cb5ecbc06bab8ee760542976.scope - libcontainer container 9af60a53cae50aa68563607ec7dd903d932aba29cb5ecbc06bab8ee760542976.
Sep 5 00:06:47.150700 systemd-networkd[1402]: calid4c90211001: Link UP
Sep 5 00:06:47.153555 systemd-networkd[1402]: calid4c90211001: Gained carrier
Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.050 [INFO][5325] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--s9hbl-eth0 csi-node-driver- calico-system 977e9373-23c2-4b46-9d36-9bf58abbfad5 1122 0 2025-09-05 00:06:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-s9hbl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid4c90211001 [] [] }} ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-"
Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.050 [INFO][5325] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0"
Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.097 [INFO][5347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" HandleID="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0"
Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.097 [INFO][5347] ipam/ipam_plugin.go 265: Auto assigning IP
ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" HandleID="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df8c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-s9hbl", "timestamp":"2025-09-05 00:06:47.097402615 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.098 [INFO][5347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.098 [INFO][5347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.098 [INFO][5347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.107 [INFO][5347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.114 [INFO][5347] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.119 [INFO][5347] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.121 [INFO][5347] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.123 [INFO][5347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.123 [INFO][5347] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.124 [INFO][5347] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992 Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.129 [INFO][5347] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.140 [INFO][5347] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.140 [INFO][5347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" host="localhost" Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.140 [INFO][5347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 00:06:47.228913 containerd[1469]: 2025-09-05 00:06:47.140 [INFO][5347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" HandleID="k8s-pod-network.9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:47.230269 containerd[1469]: 2025-09-05 00:06:47.145 [INFO][5325] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s9hbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977e9373-23c2-4b46-9d36-9bf58abbfad5", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-s9hbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid4c90211001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:47.230269 containerd[1469]: 2025-09-05 00:06:47.145 [INFO][5325] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:47.230269 containerd[1469]: 2025-09-05 00:06:47.145 [INFO][5325] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4c90211001 ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:47.230269 containerd[1469]: 2025-09-05 00:06:47.153 [INFO][5325] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:47.230269 containerd[1469]: 2025-09-05 00:06:47.157 [INFO][5325] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s9hbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977e9373-23c2-4b46-9d36-9bf58abbfad5", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992", Pod:"csi-node-driver-s9hbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid4c90211001", MAC:"2e:a4:20:9e:60:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:47.230269 containerd[1469]: 2025-09-05 00:06:47.225 [INFO][5325] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992" Namespace="calico-system" Pod="csi-node-driver-s9hbl" WorkloadEndpoint="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:47.466829 containerd[1469]: time="2025-09-05T00:06:47.466770287Z" level=info msg="StartContainer for \"9af60a53cae50aa68563607ec7dd903d932aba29cb5ecbc06bab8ee760542976\" returns successfully" Sep 5 00:06:47.503402 kubelet[2553]: I0905 00:06:47.503330 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59b7b569f5-wfvvx" podStartSLOduration=28.267665422 podStartE2EDuration="39.503301059s" podCreationTimestamp="2025-09-05 00:06:08 +0000 UTC" firstStartedPulling="2025-09-05 00:06:35.503187492 +0000 UTC m=+47.580149274" lastFinishedPulling="2025-09-05 00:06:46.738823129 +0000 UTC m=+58.815784911" observedRunningTime="2025-09-05 00:06:47.502887844 +0000 UTC m=+59.579849626" watchObservedRunningTime="2025-09-05 00:06:47.503301059 +0000 UTC m=+59.580262841" Sep 5 00:06:47.514035 containerd[1469]: time="2025-09-05T00:06:47.513768888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 5 00:06:47.515099 containerd[1469]: time="2025-09-05T00:06:47.514717775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 5 00:06:47.515099 containerd[1469]: time="2025-09-05T00:06:47.514745235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:47.515099 containerd[1469]: time="2025-09-05T00:06:47.514883412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 5 00:06:47.542232 systemd[1]: Started cri-containerd-9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992.scope - libcontainer container 9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992. Sep 5 00:06:47.555632 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 00:06:47.576772 containerd[1469]: time="2025-09-05T00:06:47.576711838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s9hbl,Uid:977e9373-23c2-4b46-9d36-9bf58abbfad5,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992\"" Sep 5 00:06:48.056516 containerd[1469]: time="2025-09-05T00:06:48.056462250Z" level=info msg="StopPodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\"" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.100 [WARNING][5482] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d", Pod:"calico-apiserver-579d867b4c-ljpgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieddfabaf632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.101 [INFO][5482] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.101 [INFO][5482] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" iface="eth0" netns="" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.101 [INFO][5482] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.101 [INFO][5482] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.125 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.126 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.126 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.133 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.133 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.134 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.140665 containerd[1469]: 2025-09-05 00:06:48.137 [INFO][5482] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.141278 containerd[1469]: time="2025-09-05T00:06:48.140717673Z" level=info msg="TearDown network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" successfully" Sep 5 00:06:48.141278 containerd[1469]: time="2025-09-05T00:06:48.140748109Z" level=info msg="StopPodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" returns successfully" Sep 5 00:06:48.173273 containerd[1469]: time="2025-09-05T00:06:48.173204073Z" level=info msg="RemovePodSandbox for \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\"" Sep 5 00:06:48.175947 containerd[1469]: time="2025-09-05T00:06:48.175906354Z" level=info msg="Forcibly stopping sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\"" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.213 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d", Pod:"calico-apiserver-579d867b4c-ljpgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calieddfabaf632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.214 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.214 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" iface="eth0" netns="" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.214 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.214 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.238 [INFO][5520] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.238 [INFO][5520] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.238 [INFO][5520] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.243 [WARNING][5520] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.243 [INFO][5520] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" HandleID="k8s-pod-network.8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.247 [INFO][5520] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.257440 containerd[1469]: 2025-09-05 00:06:48.251 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19" Sep 5 00:06:48.257440 containerd[1469]: time="2025-09-05T00:06:48.254917921Z" level=info msg="TearDown network for sandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" successfully" Sep 5 00:06:48.278014 containerd[1469]: time="2025-09-05T00:06:48.277906787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:48.278207 containerd[1469]: time="2025-09-05T00:06:48.278042719Z" level=info msg="RemovePodSandbox \"8007de3702b5bb5e6d97b1166db0f13efbef14222e1e2cb0fdc9a937e22eae19\" returns successfully" Sep 5 00:06:48.280654 containerd[1469]: time="2025-09-05T00:06:48.280609179Z" level=info msg="StopPodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\"" Sep 5 00:06:48.345537 systemd-networkd[1402]: calid4c90211001: Gained IPv6LL Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.314 [WARNING][5538] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eea7809f-54a5-40ed-94cd-7efa0cc56047", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1", Pod:"calico-apiserver-579d867b4c-lvxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali742170eaba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.315 [INFO][5538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.315 [INFO][5538] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" iface="eth0" netns="" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.315 [INFO][5538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.315 [INFO][5538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.340 [INFO][5547] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.340 [INFO][5547] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.340 [INFO][5547] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.349 [WARNING][5547] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.349 [INFO][5547] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.351 [INFO][5547] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.356797 containerd[1469]: 2025-09-05 00:06:48.354 [INFO][5538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.357286 containerd[1469]: time="2025-09-05T00:06:48.356853984Z" level=info msg="TearDown network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" successfully" Sep 5 00:06:48.357286 containerd[1469]: time="2025-09-05T00:06:48.356879823Z" level=info msg="StopPodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" returns successfully" Sep 5 00:06:48.357420 containerd[1469]: time="2025-09-05T00:06:48.357399586Z" level=info msg="RemovePodSandbox for \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\"" Sep 5 00:06:48.357453 containerd[1469]: time="2025-09-05T00:06:48.357426636Z" level=info msg="Forcibly stopping sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\"" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.392 [WARNING][5566] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0", GenerateName:"calico-apiserver-579d867b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"eea7809f-54a5-40ed-94cd-7efa0cc56047", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579d867b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1", Pod:"calico-apiserver-579d867b4c-lvxcz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali742170eaba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.393 [INFO][5566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.393 [INFO][5566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" iface="eth0" netns="" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.393 [INFO][5566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.393 [INFO][5566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.415 [INFO][5575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.416 [INFO][5575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.416 [INFO][5575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.422 [WARNING][5575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.422 [INFO][5575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" HandleID="k8s-pod-network.fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Workload="localhost-k8s-calico--apiserver--579d867b4c--lvxcz-eth0" Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.423 [INFO][5575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.429686 containerd[1469]: 2025-09-05 00:06:48.426 [INFO][5566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145" Sep 5 00:06:48.430657 containerd[1469]: time="2025-09-05T00:06:48.429733940Z" level=info msg="TearDown network for sandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" successfully" Sep 5 00:06:48.433973 containerd[1469]: time="2025-09-05T00:06:48.433931184Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:48.434076 containerd[1469]: time="2025-09-05T00:06:48.434055664Z" level=info msg="RemovePodSandbox \"fcbd776cadfd4c85805d1246cc4410e02d072e27a2952d21c44669b1c6036145\" returns successfully" Sep 5 00:06:48.434731 containerd[1469]: time="2025-09-05T00:06:48.434694490Z" level=info msg="StopPodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\"" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.473 [WARNING][5593] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"70048a66-95e5-4bf3-806b-b05cba03386e", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2", Pod:"coredns-7c65d6cfc9-8skvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec3839a929", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.473 [INFO][5593] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.473 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" iface="eth0" netns="" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.473 [INFO][5593] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.473 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.497 [INFO][5602] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.497 [INFO][5602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.497 [INFO][5602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.505 [WARNING][5602] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.505 [INFO][5602] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.507 [INFO][5602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.513147 containerd[1469]: 2025-09-05 00:06:48.510 [INFO][5593] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.513640 containerd[1469]: time="2025-09-05T00:06:48.513199156Z" level=info msg="TearDown network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" successfully" Sep 5 00:06:48.513640 containerd[1469]: time="2025-09-05T00:06:48.513228610Z" level=info msg="StopPodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" returns successfully" Sep 5 00:06:48.513905 containerd[1469]: time="2025-09-05T00:06:48.513869590Z" level=info msg="RemovePodSandbox for \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\"" Sep 5 00:06:48.513945 containerd[1469]: time="2025-09-05T00:06:48.513913762Z" level=info msg="Forcibly stopping sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\"" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.549 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"70048a66-95e5-4bf3-806b-b05cba03386e", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f69862ad92ebed83d81dd27e7d07a326af1e7bf788b78128a9acedd86cf61d2", Pod:"coredns-7c65d6cfc9-8skvt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ec3839a929", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.550 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.550 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" iface="eth0" netns="" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.550 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.550 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.576 [INFO][5630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.576 [INFO][5630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.577 [INFO][5630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.584 [WARNING][5630] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.584 [INFO][5630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" HandleID="k8s-pod-network.6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Workload="localhost-k8s-coredns--7c65d6cfc9--8skvt-eth0" Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.586 [INFO][5630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.593527 containerd[1469]: 2025-09-05 00:06:48.589 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7" Sep 5 00:06:48.594238 containerd[1469]: time="2025-09-05T00:06:48.593569844Z" level=info msg="TearDown network for sandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" successfully" Sep 5 00:06:48.613428 containerd[1469]: time="2025-09-05T00:06:48.613251967Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:48.613428 containerd[1469]: time="2025-09-05T00:06:48.613326224Z" level=info msg="RemovePodSandbox \"6194cf841311786ff0db494d2089179a44c1fa335a75b9bbd0ddab980228cbb7\" returns successfully" Sep 5 00:06:48.614374 containerd[1469]: time="2025-09-05T00:06:48.613876976Z" level=info msg="StopPodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\"" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.657 [WARNING][5648] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" WorkloadEndpoint="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.658 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.658 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" iface="eth0" netns="" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.658 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.658 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.685 [INFO][5660] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.685 [INFO][5660] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.685 [INFO][5660] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.691 [WARNING][5660] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.691 [INFO][5660] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.693 [INFO][5660] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.701348 containerd[1469]: 2025-09-05 00:06:48.696 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.702026 containerd[1469]: time="2025-09-05T00:06:48.701409726Z" level=info msg="TearDown network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" successfully" Sep 5 00:06:48.702026 containerd[1469]: time="2025-09-05T00:06:48.701441455Z" level=info msg="StopPodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" returns successfully" Sep 5 00:06:48.702078 containerd[1469]: time="2025-09-05T00:06:48.702038963Z" level=info msg="RemovePodSandbox for \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\"" Sep 5 00:06:48.702078 containerd[1469]: time="2025-09-05T00:06:48.702072275Z" level=info msg="Forcibly stopping sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\"" Sep 5 00:06:48.759954 containerd[1469]: time="2025-09-05T00:06:48.759076046Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:48.760214 containerd[1469]: time="2025-09-05T00:06:48.760192125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 5 00:06:48.761227 containerd[1469]: time="2025-09-05T00:06:48.761203130Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:48.782907 containerd[1469]: time="2025-09-05T00:06:48.766164792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:48.783475 containerd[1469]: time="2025-09-05T00:06:48.767097312Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.027793254s" Sep 5 00:06:48.783532 containerd[1469]: time="2025-09-05T00:06:48.783481568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.742 [WARNING][5678] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" WorkloadEndpoint="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.742 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.742 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" iface="eth0" netns="" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.742 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.742 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.769 [INFO][5687] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.770 [INFO][5687] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.770 [INFO][5687] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.778 [WARNING][5687] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.778 [INFO][5687] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" HandleID="k8s-pod-network.a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Workload="localhost-k8s-whisker--66fd4cc6cf--ntv2j-eth0" Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.779 [INFO][5687] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.786219 containerd[1469]: 2025-09-05 00:06:48.782 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3" Sep 5 00:06:48.786564 containerd[1469]: time="2025-09-05T00:06:48.786287563Z" level=info msg="TearDown network for sandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" successfully" Sep 5 00:06:48.790487 containerd[1469]: time="2025-09-05T00:06:48.790359735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 00:06:48.791290 containerd[1469]: time="2025-09-05T00:06:48.791242071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 00:06:48.791375 containerd[1469]: time="2025-09-05T00:06:48.791315698Z" level=info msg="RemovePodSandbox \"a54c23066dda4f0212bef9b33d8c587202c653407e3cb7dd26a97265718c5ff3\" returns successfully" Sep 5 00:06:48.791892 containerd[1469]: time="2025-09-05T00:06:48.791822287Z" level=info msg="StopPodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\"" Sep 5 00:06:48.792014 containerd[1469]: time="2025-09-05T00:06:48.791928565Z" level=info msg="CreateContainer within sandbox \"3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 00:06:48.805458 containerd[1469]: time="2025-09-05T00:06:48.805295996Z" level=info msg="CreateContainer within sandbox \"3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7a250c31a7ec3791a5aefde4893923a1427e71801d37d13b57ff8208ee5c9992\"" Sep 5 00:06:48.806950 containerd[1469]: time="2025-09-05T00:06:48.806039495Z" level=info msg="StartContainer for \"7a250c31a7ec3791a5aefde4893923a1427e71801d37d13b57ff8208ee5c9992\"" Sep 5 00:06:48.848280 systemd[1]: Started cri-containerd-7a250c31a7ec3791a5aefde4893923a1427e71801d37d13b57ff8208ee5c9992.scope - libcontainer container 7a250c31a7ec3791a5aefde4893923a1427e71801d37d13b57ff8208ee5c9992. Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.846 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fb5a9823-833b-45bb-9546-46d6c264eca2", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18", Pod:"coredns-7c65d6cfc9-6gz8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75696995526", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.846 [INFO][5705] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.846 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" iface="eth0" netns="" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.846 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.846 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.882 [INFO][5731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.882 [INFO][5731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.882 [INFO][5731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.891 [WARNING][5731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.891 [INFO][5731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.893 [INFO][5731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:48.900082 containerd[1469]: 2025-09-05 00:06:48.896 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:48.900740 containerd[1469]: time="2025-09-05T00:06:48.900087709Z" level=info msg="TearDown network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" successfully" Sep 5 00:06:48.900740 containerd[1469]: time="2025-09-05T00:06:48.900122123Z" level=info msg="StopPodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" returns successfully" Sep 5 00:06:48.901222 containerd[1469]: time="2025-09-05T00:06:48.901195053Z" level=info msg="RemovePodSandbox for \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\"" Sep 5 00:06:48.901303 containerd[1469]: time="2025-09-05T00:06:48.901228765Z" level=info msg="Forcibly stopping sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\"" Sep 5 00:06:48.905731 containerd[1469]: time="2025-09-05T00:06:48.905654694Z" level=info msg="StartContainer for \"7a250c31a7ec3791a5aefde4893923a1427e71801d37d13b57ff8208ee5c9992\" returns successfully" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.937 [WARNING][5765] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"fb5a9823-833b-45bb-9546-46d6c264eca2", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 5, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"351c803c9d432b3bfc10730f2c3feb874d7413364fd78ac2be9bf18c4eadbb18", Pod:"coredns-7c65d6cfc9-6gz8l", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali75696995526", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.937 [INFO][5765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.937 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" iface="eth0" netns="" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.937 [INFO][5765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.937 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.962 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.962 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:48.962 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:49.051 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:49.051 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" HandleID="k8s-pod-network.555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Workload="localhost-k8s-coredns--7c65d6cfc9--6gz8l-eth0" Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:49.074 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.080587 containerd[1469]: 2025-09-05 00:06:49.077 [INFO][5765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da" Sep 5 00:06:49.081348 containerd[1469]: time="2025-09-05T00:06:49.080636258Z" level=info msg="TearDown network for sandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" successfully" Sep 5 00:06:49.203390 containerd[1469]: time="2025-09-05T00:06:49.203213952Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 5 00:06:49.203390 containerd[1469]: time="2025-09-05T00:06:49.203300773Z" level=info msg="RemovePodSandbox \"555881da39d0d5638a8fe67177134c42fbf8f7665b515ef4ea00e77d588605da\" returns successfully" Sep 5 00:06:49.203897 containerd[1469]: time="2025-09-05T00:06:49.203872806Z" level=info msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\"" Sep 5 00:06:49.219422 containerd[1469]: time="2025-09-05T00:06:49.218586391Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:49.219808 containerd[1469]: time="2025-09-05T00:06:49.219766904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 5 00:06:49.222673 containerd[1469]: time="2025-09-05T00:06:49.222602001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 432.194087ms" Sep 5 00:06:49.222673 containerd[1469]: time="2025-09-05T00:06:49.222665228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 5 00:06:49.223800 containerd[1469]: time="2025-09-05T00:06:49.223773898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 5 00:06:49.227374 containerd[1469]: time="2025-09-05T00:06:49.227210893Z" level=info msg="CreateContainer within sandbox \"09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 00:06:49.244741 containerd[1469]: time="2025-09-05T00:06:49.244675308Z" level=info msg="CreateContainer within sandbox \"09469c138bf9f7d2e8ebd7421306fe52ae4daf1c60d55c2b67d252b5fa94bea1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"763151f9474ead2946300515e54ba04f370fcbcb977ecdcd17fbf01fccc53e37\"" Sep 5 00:06:49.245515 containerd[1469]: time="2025-09-05T00:06:49.245461659Z" level=info msg="StartContainer for \"763151f9474ead2946300515e54ba04f370fcbcb977ecdcd17fbf01fccc53e37\"" Sep 5 00:06:49.284128 systemd[1]: Started cri-containerd-763151f9474ead2946300515e54ba04f370fcbcb977ecdcd17fbf01fccc53e37.scope - libcontainer container 763151f9474ead2946300515e54ba04f370fcbcb977ecdcd17fbf01fccc53e37. Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.246 [WARNING][5798] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s9hbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977e9373-23c2-4b46-9d36-9bf58abbfad5", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992", Pod:"csi-node-driver-s9hbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid4c90211001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.246 [INFO][5798] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.246 [INFO][5798] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" iface="eth0" netns="" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.246 [INFO][5798] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.246 [INFO][5798] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.309 [INFO][5808] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.309 [INFO][5808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.309 [INFO][5808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.315 [WARNING][5808] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.315 [INFO][5808] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.317 [INFO][5808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.325427 containerd[1469]: 2025-09-05 00:06:49.321 [INFO][5798] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.326722 containerd[1469]: time="2025-09-05T00:06:49.326528844Z" level=info msg="TearDown network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" successfully" Sep 5 00:06:49.326722 containerd[1469]: time="2025-09-05T00:06:49.326569229Z" level=info msg="StopPodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" returns successfully" Sep 5 00:06:49.327171 containerd[1469]: time="2025-09-05T00:06:49.327130712Z" level=info msg="RemovePodSandbox for \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\"" Sep 5 00:06:49.327236 containerd[1469]: time="2025-09-05T00:06:49.327181086Z" level=info msg="Forcibly stopping sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\"" Sep 5 00:06:49.425697 containerd[1469]: time="2025-09-05T00:06:49.425640573Z" level=info msg="StartContainer for \"763151f9474ead2946300515e54ba04f370fcbcb977ecdcd17fbf01fccc53e37\" returns successfully" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.428 [WARNING][5859] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--s9hbl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"977e9373-23c2-4b46-9d36-9bf58abbfad5", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992", Pod:"csi-node-driver-s9hbl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid4c90211001", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.428 [INFO][5859] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.428 [INFO][5859] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" iface="eth0" netns="" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.428 [INFO][5859] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.428 [INFO][5859] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.455 [INFO][5874] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.455 [INFO][5874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.455 [INFO][5874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.463 [WARNING][5874] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.463 [INFO][5874] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" HandleID="k8s-pod-network.ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Workload="localhost-k8s-csi--node--driver--s9hbl-eth0" Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.464 [INFO][5874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.471949 containerd[1469]: 2025-09-05 00:06:49.467 [INFO][5859] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901" Sep 5 00:06:49.472475 containerd[1469]: time="2025-09-05T00:06:49.472017572Z" level=info msg="TearDown network for sandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" successfully" Sep 5 00:06:49.556620 containerd[1469]: time="2025-09-05T00:06:49.556564232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:49.556812 containerd[1469]: time="2025-09-05T00:06:49.556652115Z" level=info msg="RemovePodSandbox \"ca9014d7e60343cf480a45406870650af0cfecd865e4f10b93672ec9e3112901\" returns successfully" Sep 5 00:06:49.557511 containerd[1469]: time="2025-09-05T00:06:49.557212436Z" level=info msg="StopPodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\"" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.605 [WARNING][5891] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--jxkr8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b658ca0f-5303-457c-9cf1-6dddd6c1387f", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163", Pod:"goldmane-7988f88666-jxkr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfb0465f1fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.605 [INFO][5891] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.605 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" iface="eth0" netns="" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.605 [INFO][5891] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.605 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.634 [INFO][5902] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.634 [INFO][5902] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.634 [INFO][5902] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.641 [WARNING][5902] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.641 [INFO][5902] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.643 [INFO][5902] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.650813 containerd[1469]: 2025-09-05 00:06:49.646 [INFO][5891] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.652254 containerd[1469]: time="2025-09-05T00:06:49.650874059Z" level=info msg="TearDown network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" successfully" Sep 5 00:06:49.652254 containerd[1469]: time="2025-09-05T00:06:49.650912520Z" level=info msg="StopPodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" returns successfully" Sep 5 00:06:49.652254 containerd[1469]: time="2025-09-05T00:06:49.651664087Z" level=info msg="RemovePodSandbox for \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\"" Sep 5 00:06:49.652254 containerd[1469]: time="2025-09-05T00:06:49.651691518Z" level=info msg="Forcibly stopping sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\"" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.689 [WARNING][5921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7988f88666--jxkr8-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"b658ca0f-5303-457c-9cf1-6dddd6c1387f", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43975f4f58236092d688d00011a8c07e0998447179774b16764c11b28b240163", Pod:"goldmane-7988f88666-jxkr8", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"califfb0465f1fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.689 [INFO][5921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.689 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" iface="eth0" netns="" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.689 [INFO][5921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.689 [INFO][5921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.713 [INFO][5929] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.713 [INFO][5929] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.713 [INFO][5929] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.720 [WARNING][5929] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.720 [INFO][5929] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" HandleID="k8s-pod-network.2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Workload="localhost-k8s-goldmane--7988f88666--jxkr8-eth0" Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.722 [INFO][5929] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.728357 containerd[1469]: 2025-09-05 00:06:49.725 [INFO][5921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743" Sep 5 00:06:49.728357 containerd[1469]: time="2025-09-05T00:06:49.728310540Z" level=info msg="TearDown network for sandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" successfully" Sep 5 00:06:49.733117 containerd[1469]: time="2025-09-05T00:06:49.733069280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:49.733198 containerd[1469]: time="2025-09-05T00:06:49.733137226Z" level=info msg="RemovePodSandbox \"2cc72a2469bfc2931044e71c94f5377bb4af76e3c5cc857611b631f731e28743\" returns successfully" Sep 5 00:06:49.733803 containerd[1469]: time="2025-09-05T00:06:49.733758481Z" level=info msg="StopPodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\"" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.773 [WARNING][5947] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0", GenerateName:"calico-apiserver-56d8f46b5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43623a80-72d4-46e0-adcc-392202d1d1f2", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8f46b5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd", Pod:"calico-apiserver-56d8f46b5d-7zqzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia27ebc39e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.774 [INFO][5947] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.774 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" iface="eth0" netns="" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.774 [INFO][5947] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.774 [INFO][5947] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.799 [INFO][5956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.800 [INFO][5956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.800 [INFO][5956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.807 [WARNING][5956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.807 [INFO][5956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.808 [INFO][5956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.814418 containerd[1469]: 2025-09-05 00:06:49.811 [INFO][5947] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.815056 containerd[1469]: time="2025-09-05T00:06:49.814461520Z" level=info msg="TearDown network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" successfully" Sep 5 00:06:49.815056 containerd[1469]: time="2025-09-05T00:06:49.814492147Z" level=info msg="StopPodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" returns successfully" Sep 5 00:06:49.815056 containerd[1469]: time="2025-09-05T00:06:49.815016291Z" level=info msg="RemovePodSandbox for \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\"" Sep 5 00:06:49.815130 containerd[1469]: time="2025-09-05T00:06:49.815047638Z" level=info msg="Forcibly stopping sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\"" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.852 [WARNING][5974] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0", GenerateName:"calico-apiserver-56d8f46b5d-", Namespace:"calico-apiserver", SelfLink:"", UID:"43623a80-72d4-46e0-adcc-392202d1d1f2", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56d8f46b5d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c2b48239b0852f08cf224f5df09e91037ce581771d6b44e98b397b56cafe2dbd", Pod:"calico-apiserver-56d8f46b5d-7zqzw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia27ebc39e5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.852 [INFO][5974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.852 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" iface="eth0" netns="" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.852 [INFO][5974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.852 [INFO][5974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.876 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.876 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.876 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.882 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.882 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" HandleID="k8s-pod-network.6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Workload="localhost-k8s-calico--apiserver--56d8f46b5d--7zqzw-eth0" Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.884 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.893401 containerd[1469]: 2025-09-05 00:06:49.887 [INFO][5974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e" Sep 5 00:06:49.893401 containerd[1469]: time="2025-09-05T00:06:49.891165359Z" level=info msg="TearDown network for sandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" successfully" Sep 5 00:06:49.895592 containerd[1469]: time="2025-09-05T00:06:49.895554102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:49.895665 containerd[1469]: time="2025-09-05T00:06:49.895631997Z" level=info msg="RemovePodSandbox \"6f5f4a0a57ba9fe7461bfac882eadba4a2d0492632c0bb3a7e3ccc7d950def8e\" returns successfully" Sep 5 00:06:49.896273 containerd[1469]: time="2025-09-05T00:06:49.896235368Z" level=info msg="StopPodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\"" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.936 [WARNING][6000] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0", GenerateName:"calico-kube-controllers-59b7b569f5-", Namespace:"calico-system", SelfLink:"", UID:"ff309b76-ee57-400d-897b-26dbc2ef6eeb", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b7b569f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1", Pod:"calico-kube-controllers-59b7b569f5-wfvvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c502d625f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.938 [INFO][6000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.938 [INFO][6000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" iface="eth0" netns="" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.938 [INFO][6000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.938 [INFO][6000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.971 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.972 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.972 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.977 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.978 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.979 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:49.985941 containerd[1469]: 2025-09-05 00:06:49.982 [INFO][6000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:49.985941 containerd[1469]: time="2025-09-05T00:06:49.985903553Z" level=info msg="TearDown network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" successfully" Sep 5 00:06:49.985941 containerd[1469]: time="2025-09-05T00:06:49.985930893Z" level=info msg="StopPodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" returns successfully" Sep 5 00:06:49.986579 containerd[1469]: time="2025-09-05T00:06:49.986556696Z" level=info msg="RemovePodSandbox for \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\"" Sep 5 00:06:49.986623 containerd[1469]: time="2025-09-05T00:06:49.986586131Z" level=info msg="Forcibly stopping sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\"" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.025 [WARNING][6026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0", GenerateName:"calico-kube-controllers-59b7b569f5-", Namespace:"calico-system", SelfLink:"", UID:"ff309b76-ee57-400d-897b-26dbc2ef6eeb", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 0, 6, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59b7b569f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"068533d3b9ade13688494fbaddec2b1aed9362f2a689c9ab54bf066ff3084fb1", Pod:"calico-kube-controllers-59b7b569f5-wfvvx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7c502d625f3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.026 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.026 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" iface="eth0" netns="" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.026 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.026 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.050 [INFO][6035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.050 [INFO][6035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.050 [INFO][6035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.057 [WARNING][6035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.057 [INFO][6035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" HandleID="k8s-pod-network.02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Workload="localhost-k8s-calico--kube--controllers--59b7b569f5--wfvvx-eth0" Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.059 [INFO][6035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:06:50.066590 containerd[1469]: 2025-09-05 00:06:50.063 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f" Sep 5 00:06:50.067285 containerd[1469]: time="2025-09-05T00:06:50.066637711Z" level=info msg="TearDown network for sandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" successfully" Sep 5 00:06:50.071920 containerd[1469]: time="2025-09-05T00:06:50.071853076Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 5 00:06:50.072072 containerd[1469]: time="2025-09-05T00:06:50.071940990Z" level=info msg="RemovePodSandbox \"02bc5a940c091f8d6f1dfc384a6336e0127fce89e67cd991906342594d50b49f\" returns successfully" Sep 5 00:06:50.492567 kubelet[2553]: I0905 00:06:50.492520 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:06:50.619782 systemd[1]: run-containerd-runc-k8s.io-9af60a53cae50aa68563607ec7dd903d932aba29cb5ecbc06bab8ee760542976-runc.OPbTBa.mount: Deactivated successfully. Sep 5 00:06:50.744096 kubelet[2553]: I0905 00:06:50.743906 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-579d867b4c-lvxcz" podStartSLOduration=33.086135125 podStartE2EDuration="45.743884408s" podCreationTimestamp="2025-09-05 00:06:05 +0000 UTC" firstStartedPulling="2025-09-05 00:06:36.565848017 +0000 UTC m=+48.642809809" lastFinishedPulling="2025-09-05 00:06:49.22359731 +0000 UTC m=+61.300559092" observedRunningTime="2025-09-05 00:06:49.595397816 +0000 UTC m=+61.672359598" watchObservedRunningTime="2025-09-05 00:06:50.743884408 +0000 UTC m=+62.820846190" Sep 5 00:06:51.167009 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:54496.service - OpenSSH per-connection server daemon (10.0.0.1:54496). Sep 5 00:06:51.234403 sshd[6087]: Accepted publickey for core from 10.0.0.1 port 54496 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:51.236690 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:51.241355 systemd-logind[1453]: New session 13 of user core. Sep 5 00:06:51.250145 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 00:06:51.514911 sshd[6087]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:51.523272 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:54496.service: Deactivated successfully. Sep 5 00:06:51.525693 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 00:06:51.527806 systemd-logind[1453]: Session 13 logged out. 
Waiting for processes to exit. Sep 5 00:06:51.534341 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:54508.service - OpenSSH per-connection server daemon (10.0.0.1:54508). Sep 5 00:06:51.535300 systemd-logind[1453]: Removed session 13. Sep 5 00:06:51.565333 sshd[6102]: Accepted publickey for core from 10.0.0.1 port 54508 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:51.567304 sshd[6102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:51.571730 systemd-logind[1453]: New session 14 of user core. Sep 5 00:06:51.579157 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 00:06:51.760412 sshd[6102]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:51.769778 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:54508.service: Deactivated successfully. Sep 5 00:06:51.773919 systemd[1]: session-14.scope: Deactivated successfully. Sep 5 00:06:51.778105 systemd-logind[1453]: Session 14 logged out. Waiting for processes to exit. Sep 5 00:06:51.798469 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:54514.service - OpenSSH per-connection server daemon (10.0.0.1:54514). Sep 5 00:06:51.799654 systemd-logind[1453]: Removed session 14. Sep 5 00:06:51.828116 sshd[6114]: Accepted publickey for core from 10.0.0.1 port 54514 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:51.830187 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:51.834375 systemd-logind[1453]: New session 15 of user core. Sep 5 00:06:51.843274 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 00:06:52.012659 sshd[6114]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:52.017352 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:54514.service: Deactivated successfully. Sep 5 00:06:52.020080 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 00:06:52.020897 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Sep 5 00:06:52.021993 systemd-logind[1453]: Removed session 15. 
Sep 5 00:06:52.745192 containerd[1469]: time="2025-09-05T00:06:52.745110404Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:52.746022 containerd[1469]: time="2025-09-05T00:06:52.745937228Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 5 00:06:52.747384 containerd[1469]: time="2025-09-05T00:06:52.747352259Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:52.750208 containerd[1469]: time="2025-09-05T00:06:52.750172864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:52.750789 containerd[1469]: time="2025-09-05T00:06:52.750761463Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.52584885s" Sep 5 00:06:52.750833 containerd[1469]: time="2025-09-05T00:06:52.750791279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 5 00:06:52.752346 containerd[1469]: time="2025-09-05T00:06:52.751958547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 00:06:52.754583 containerd[1469]: time="2025-09-05T00:06:52.754520930Z" level=info msg="CreateContainer within sandbox \"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 5 00:06:52.771790 containerd[1469]: time="2025-09-05T00:06:52.771736387Z" level=info msg="CreateContainer within sandbox \"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0552b779e540383a01dbe10951dd78089dd505c824b63d8854a39c4981801193\"" Sep 5 00:06:52.772368 containerd[1469]: time="2025-09-05T00:06:52.772343881Z" level=info msg="StartContainer for \"0552b779e540383a01dbe10951dd78089dd505c824b63d8854a39c4981801193\"" Sep 5 00:06:52.821170 systemd[1]: Started cri-containerd-0552b779e540383a01dbe10951dd78089dd505c824b63d8854a39c4981801193.scope - libcontainer container 0552b779e540383a01dbe10951dd78089dd505c824b63d8854a39c4981801193. Sep 5 00:06:52.858666 containerd[1469]: time="2025-09-05T00:06:52.857227272Z" level=info msg="StartContainer for \"0552b779e540383a01dbe10951dd78089dd505c824b63d8854a39c4981801193\" returns successfully" Sep 5 00:06:55.850578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3351085906.mount: Deactivated successfully. 
Sep 5 00:06:55.990289 containerd[1469]: time="2025-09-05T00:06:55.990228167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:55.990755 containerd[1469]: time="2025-09-05T00:06:55.974076105Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 5 00:06:55.990755 containerd[1469]: time="2025-09-05T00:06:55.978766778Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.226741897s" Sep 5 00:06:55.990755 containerd[1469]: time="2025-09-05T00:06:55.990376074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 5 00:06:55.995671 containerd[1469]: time="2025-09-05T00:06:55.993440549Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:55.995671 containerd[1469]: time="2025-09-05T00:06:55.994081621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:55.995671 containerd[1469]: time="2025-09-05T00:06:55.994770132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 00:06:55.996051 containerd[1469]: time="2025-09-05T00:06:55.995947790Z" level=info msg="CreateContainer within sandbox \"3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 00:06:56.034352 containerd[1469]: time="2025-09-05T00:06:56.034287343Z" level=info msg="CreateContainer within sandbox \"3fecf2535b14a9862d771f1f2bcec2d5ff73ba0d5cd5a1ed6ceb674423e582d7\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"88ed4c9f427285101574c0b5bc89d4d528cbe99c3fb312015656fecc4bee0e9a\"" Sep 5 00:06:56.035023 containerd[1469]: time="2025-09-05T00:06:56.034955748Z" level=info msg="StartContainer for \"88ed4c9f427285101574c0b5bc89d4d528cbe99c3fb312015656fecc4bee0e9a\"" Sep 5 00:06:56.100215 systemd[1]: Started cri-containerd-88ed4c9f427285101574c0b5bc89d4d528cbe99c3fb312015656fecc4bee0e9a.scope - libcontainer container 88ed4c9f427285101574c0b5bc89d4d528cbe99c3fb312015656fecc4bee0e9a. Sep 5 00:06:56.145337 containerd[1469]: time="2025-09-05T00:06:56.145180086Z" level=info msg="StartContainer for \"88ed4c9f427285101574c0b5bc89d4d528cbe99c3fb312015656fecc4bee0e9a\" returns successfully" Sep 5 00:06:57.032454 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:54522.service - OpenSSH per-connection server daemon (10.0.0.1:54522). Sep 5 00:06:57.081665 sshd[6227]: Accepted publickey for core from 10.0.0.1 port 54522 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:06:57.083715 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:06:57.088456 systemd-logind[1453]: New session 16 of user core. 
Sep 5 00:06:57.096116 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 5 00:06:57.285200 sshd[6227]: pam_unix(sshd:session): session closed for user core Sep 5 00:06:57.290680 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:54522.service: Deactivated successfully. Sep 5 00:06:57.293752 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 00:06:57.295414 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Sep 5 00:06:57.296794 systemd-logind[1453]: Removed session 16. Sep 5 00:06:58.591890 containerd[1469]: time="2025-09-05T00:06:58.591818056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:58.592902 containerd[1469]: time="2025-09-05T00:06:58.592850950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 5 00:06:58.594715 containerd[1469]: time="2025-09-05T00:06:58.594654694Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:58.648667 containerd[1469]: time="2025-09-05T00:06:58.648579270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 00:06:58.649619 containerd[1469]: time="2025-09-05T00:06:58.649567409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.654755088s" Sep 5 00:06:58.649696 containerd[1469]: time="2025-09-05T00:06:58.649625790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 5 00:06:58.653192 containerd[1469]: time="2025-09-05T00:06:58.653148489Z" level=info msg="CreateContainer within sandbox \"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 00:06:58.672182 containerd[1469]: time="2025-09-05T00:06:58.672125267Z" level=info msg="CreateContainer within sandbox \"9f83f6164e4429322bf35483b3553847ccfed7bfe02d7c2f54763be9b1edd992\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"25f9ac1a4efc5074aab2e33a4b08b6a794e12aa8a5f5bce1a7ebce01ee715242\"" Sep 5 00:06:58.672965 containerd[1469]: time="2025-09-05T00:06:58.672780200Z" level=info msg="StartContainer for \"25f9ac1a4efc5074aab2e33a4b08b6a794e12aa8a5f5bce1a7ebce01ee715242\"" Sep 5 00:06:58.737299 systemd[1]: Started cri-containerd-25f9ac1a4efc5074aab2e33a4b08b6a794e12aa8a5f5bce1a7ebce01ee715242.scope - libcontainer container 25f9ac1a4efc5074aab2e33a4b08b6a794e12aa8a5f5bce1a7ebce01ee715242. 
Sep 5 00:06:58.790247 containerd[1469]: time="2025-09-05T00:06:58.790184474Z" level=info msg="StartContainer for \"25f9ac1a4efc5074aab2e33a4b08b6a794e12aa8a5f5bce1a7ebce01ee715242\" returns successfully" Sep 5 00:06:59.295771 kubelet[2553]: I0905 00:06:59.295698 2553 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 00:06:59.302289 kubelet[2553]: I0905 00:06:59.302269 2553 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 00:06:59.696132 kubelet[2553]: I0905 00:06:59.695929 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-665795d94b-nh8zn" podStartSLOduration=4.701319854 podStartE2EDuration="24.695905529s" podCreationTimestamp="2025-09-05 00:06:35 +0000 UTC" firstStartedPulling="2025-09-05 00:06:35.999324113 +0000 UTC m=+48.076285895" lastFinishedPulling="2025-09-05 00:06:55.993909788 +0000 UTC m=+68.070871570" observedRunningTime="2025-09-05 00:06:56.523143598 +0000 UTC m=+68.600105380" watchObservedRunningTime="2025-09-05 00:06:59.695905529 +0000 UTC m=+71.772867311" Sep 5 00:06:59.696315 kubelet[2553]: I0905 00:06:59.696245 2553 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s9hbl" podStartSLOduration=41.623962476 podStartE2EDuration="52.696234117s" podCreationTimestamp="2025-09-05 00:06:07 +0000 UTC" firstStartedPulling="2025-09-05 00:06:47.578301953 +0000 UTC m=+59.655263735" lastFinishedPulling="2025-09-05 00:06:58.650573584 +0000 UTC m=+70.727535376" observedRunningTime="2025-09-05 00:06:59.695651139 +0000 UTC m=+71.772612921" watchObservedRunningTime="2025-09-05 00:06:59.696234117 +0000 UTC m=+71.773196030" Sep 5 00:07:02.300141 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:41188.service - OpenSSH per-connection server daemon (10.0.0.1:41188). Sep 5 00:07:02.349625 sshd[6289]: Accepted publickey for core from 10.0.0.1 port 41188 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:02.352027 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:02.356272 systemd-logind[1453]: New session 17 of user core. Sep 5 00:07:02.366134 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 00:07:02.634775 sshd[6289]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:02.645795 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:41188.service: Deactivated successfully. Sep 5 00:07:02.650788 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 00:07:02.652275 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Sep 5 00:07:02.653821 systemd-logind[1453]: Removed session 17. Sep 5 00:07:03.064898 kubelet[2553]: E0905 00:07:03.064810 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:06.860931 kubelet[2553]: I0905 00:07:06.860859 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:07:07.646844 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:41202.service - OpenSSH per-connection server daemon (10.0.0.1:41202). 
Sep 5 00:07:07.700757 sshd[6349]: Accepted publickey for core from 10.0.0.1 port 41202 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:07.702761 sshd[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:07.707269 systemd-logind[1453]: New session 18 of user core. Sep 5 00:07:07.715135 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 00:07:07.932602 sshd[6349]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:07.939404 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:41202.service: Deactivated successfully. Sep 5 00:07:07.943953 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 00:07:07.944952 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Sep 5 00:07:07.946890 systemd-logind[1453]: Removed session 18. Sep 5 00:07:08.557940 kubelet[2553]: I0905 00:07:08.557884 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:07:12.079056 kubelet[2553]: E0905 00:07:12.078973 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:12.952792 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:60992.service - OpenSSH per-connection server daemon (10.0.0.1:60992). Sep 5 00:07:13.000347 sshd[6366]: Accepted publickey for core from 10.0.0.1 port 60992 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:13.002443 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:13.009006 systemd-logind[1453]: New session 19 of user core. Sep 5 00:07:13.018389 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 00:07:13.613681 sshd[6366]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:13.625018 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:60992.service: Deactivated successfully. Sep 5 00:07:13.628460 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 00:07:13.634156 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Sep 5 00:07:13.642903 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:32768.service - OpenSSH per-connection server daemon (10.0.0.1:32768). Sep 5 00:07:13.645143 systemd-logind[1453]: Removed session 19. Sep 5 00:07:13.671006 sshd[6380]: Accepted publickey for core from 10.0.0.1 port 32768 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:13.673089 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:13.679760 systemd-logind[1453]: New session 20 of user core. Sep 5 00:07:13.689166 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 5 00:07:14.000158 sshd[6380]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:14.009442 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:32768.service: Deactivated successfully. Sep 5 00:07:14.011935 systemd[1]: session-20.scope: Deactivated successfully. Sep 5 00:07:14.013776 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Sep 5 00:07:14.023397 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:32776.service - OpenSSH per-connection server daemon (10.0.0.1:32776). Sep 5 00:07:14.024642 systemd-logind[1453]: Removed session 20. 
Sep 5 00:07:14.064305 sshd[6393]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:14.066276 sshd[6393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:14.071055 systemd-logind[1453]: New session 21 of user core. Sep 5 00:07:14.078154 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 5 00:07:15.811415 sshd[6393]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:15.819384 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:32776.service: Deactivated successfully. Sep 5 00:07:15.822631 systemd[1]: session-21.scope: Deactivated successfully. Sep 5 00:07:15.827454 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Sep 5 00:07:15.838364 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:32778.service - OpenSSH per-connection server daemon (10.0.0.1:32778). Sep 5 00:07:15.840989 systemd-logind[1453]: Removed session 21. Sep 5 00:07:15.895674 sshd[6427]: Accepted publickey for core from 10.0.0.1 port 32778 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:15.898876 sshd[6427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:15.904310 systemd-logind[1453]: New session 22 of user core. Sep 5 00:07:15.909142 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 5 00:07:16.059840 kubelet[2553]: E0905 00:07:16.059800 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:16.390186 sshd[6427]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:16.402786 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:32778.service: Deactivated successfully. Sep 5 00:07:16.405259 systemd[1]: session-22.scope: Deactivated successfully. Sep 5 00:07:16.406917 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Sep 5 00:07:16.413345 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:32792.service - OpenSSH per-connection server daemon (10.0.0.1:32792). Sep 5 00:07:16.414480 systemd-logind[1453]: Removed session 22. Sep 5 00:07:16.454183 sshd[6447]: Accepted publickey for core from 10.0.0.1 port 32792 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:16.456176 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:16.460691 systemd-logind[1453]: New session 23 of user core. Sep 5 00:07:16.468108 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 5 00:07:16.622292 sshd[6447]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:16.629409 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:32792.service: Deactivated successfully. Sep 5 00:07:16.632063 systemd[1]: session-23.scope: Deactivated successfully. Sep 5 00:07:16.632864 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Sep 5 00:07:16.633952 systemd-logind[1453]: Removed session 23. 
Sep 5 00:07:20.473070 kubelet[2553]: I0905 00:07:20.472593 2553 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 00:07:21.072060 containerd[1469]: time="2025-09-05T00:07:21.071887515Z" level=info msg="StopContainer for \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\" with timeout 30 (s)" Sep 5 00:07:21.075525 containerd[1469]: time="2025-09-05T00:07:21.075400702Z" level=info msg="Stop container \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\" with signal terminated" Sep 5 00:07:21.136253 systemd[1]: cri-containerd-48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f.scope: Deactivated successfully. Sep 5 00:07:21.173822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f-rootfs.mount: Deactivated successfully. Sep 5 00:07:21.192307 containerd[1469]: time="2025-09-05T00:07:21.171740868Z" level=info msg="shim disconnected" id=48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f namespace=k8s.io Sep 5 00:07:21.199897 containerd[1469]: time="2025-09-05T00:07:21.199791815Z" level=warning msg="cleaning up after shim disconnected" id=48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f namespace=k8s.io Sep 5 00:07:21.199897 containerd[1469]: time="2025-09-05T00:07:21.199859706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:21.254200 containerd[1469]: time="2025-09-05T00:07:21.253844797Z" level=info msg="StopContainer for \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\" returns successfully" Sep 5 00:07:21.258028 containerd[1469]: time="2025-09-05T00:07:21.257971180Z" level=info msg="StopPodSandbox for \"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d\"" Sep 5 00:07:21.263371 containerd[1469]: time="2025-09-05T00:07:21.263307719Z" level=info msg="Container to stop \"48de41ccc947f1112b220e8bcf83b17fe8c06c600204a17fb2975e64abf38f0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 5 00:07:21.268455 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d-shm.mount: Deactivated successfully. Sep 5 00:07:21.274193 systemd[1]: cri-containerd-5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d.scope: Deactivated successfully. Sep 5 00:07:21.306343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d-rootfs.mount: Deactivated successfully. 
Sep 5 00:07:21.310284 containerd[1469]: time="2025-09-05T00:07:21.300381113Z" level=info msg="shim disconnected" id=5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d namespace=k8s.io Sep 5 00:07:21.310284 containerd[1469]: time="2025-09-05T00:07:21.310181342Z" level=warning msg="cleaning up after shim disconnected" id=5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d namespace=k8s.io Sep 5 00:07:21.310284 containerd[1469]: time="2025-09-05T00:07:21.310192383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 5 00:07:21.558585 systemd-networkd[1402]: calieddfabaf632: Link DOWN Sep 5 00:07:21.558596 systemd-networkd[1402]: calieddfabaf632: Lost carrier Sep 5 00:07:21.580360 kubelet[2553]: I0905 00:07:21.580306 2553 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Sep 5 00:07:21.642312 systemd[1]: Started sshd@23-10.0.0.14:22-10.0.0.1:42164.service - OpenSSH per-connection server daemon (10.0.0.1:42164). Sep 5 00:07:21.674054 sshd[6605]: Accepted publickey for core from 10.0.0.1 port 42164 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:21.674998 sshd[6605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:21.680031 systemd-logind[1453]: New session 24 of user core. Sep 5 00:07:21.686107 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 5 00:07:21.971226 sshd[6605]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:21.975592 systemd[1]: sshd@23-10.0.0.14:22-10.0.0.1:42164.service: Deactivated successfully. Sep 5 00:07:21.978258 systemd[1]: session-24.scope: Deactivated successfully. Sep 5 00:07:21.978931 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. Sep 5 00:07:21.980006 systemd-logind[1453]: Removed session 24. Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.548 [INFO][6582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.551 [INFO][6582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" iface="eth0" netns="/var/run/netns/cni-829b3d84-9143-634e-0ff4-048750a33b24" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.551 [INFO][6582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" iface="eth0" netns="/var/run/netns/cni-829b3d84-9143-634e-0ff4-048750a33b24" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.567 [INFO][6582] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" after=16.040423ms iface="eth0" netns="/var/run/netns/cni-829b3d84-9143-634e-0ff4-048750a33b24" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.567 [INFO][6582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.567 [INFO][6582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.647 [INFO][6596] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" HandleID="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.648 [INFO][6596] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:21.648 [INFO][6596] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:22.471 [INFO][6596] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" HandleID="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:22.472 [INFO][6596] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" HandleID="k8s-pod-network.5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Workload="localhost-k8s-calico--apiserver--579d867b4c--ljpgm-eth0" Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:22.476 [INFO][6596] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 00:07:22.488269 containerd[1469]: 2025-09-05 00:07:22.482 [INFO][6582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d" Sep 5 00:07:22.493244 containerd[1469]: time="2025-09-05T00:07:22.493196381Z" level=info msg="TearDown network for sandbox \"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d\" successfully" Sep 5 00:07:22.493309 containerd[1469]: time="2025-09-05T00:07:22.493243472Z" level=info msg="StopPodSandbox for \"5e1bf355c9e4d9870b289f45d4ac8f92f3a76366b12c1cdd636c3550e0b4529d\" returns successfully" Sep 5 00:07:22.498364 systemd[1]: run-netns-cni\x2d829b3d84\x2d9143\x2d634e\x2d0ff4\x2d048750a33b24.mount: Deactivated successfully. 
Sep 5 00:07:22.593882 kubelet[2553]: I0905 00:07:22.593809 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-calico-apiserver-certs\") pod \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\" (UID: \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\") " Sep 5 00:07:22.593882 kubelet[2553]: I0905 00:07:22.593898 2553 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrxqv\" (UniqueName: \"kubernetes.io/projected/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-kube-api-access-wrxqv\") pod \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\" (UID: \"28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148\") " Sep 5 00:07:22.623441 systemd[1]: var-lib-kubelet-pods-28bf50aa\x2dc0b3\x2d4b34\x2da1a5\x2d10a9dcc0d148-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrxqv.mount: Deactivated successfully. Sep 5 00:07:22.627454 kubelet[2553]: I0905 00:07:22.624620 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-kube-api-access-wrxqv" (OuterVolumeSpecName: "kube-api-access-wrxqv") pod "28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148" (UID: "28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148"). InnerVolumeSpecName "kube-api-access-wrxqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 5 00:07:22.630232 systemd[1]: var-lib-kubelet-pods-28bf50aa\x2dc0b3\x2d4b34\x2da1a5\x2d10a9dcc0d148-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 5 00:07:22.658590 kubelet[2553]: I0905 00:07:22.657560 2553 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148" (UID: "28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 5 00:07:22.694304 kubelet[2553]: I0905 00:07:22.694237 2553 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:22.694304 kubelet[2553]: I0905 00:07:22.694289 2553 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrxqv\" (UniqueName: \"kubernetes.io/projected/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148-kube-api-access-wrxqv\") on node \"localhost\" DevicePath \"\"" Sep 5 00:07:22.939040 systemd[1]: Removed slice kubepods-besteffort-pod28bf50aa_c0b3_4b34_a1a5_10a9dcc0d148.slice - libcontainer container kubepods-besteffort-pod28bf50aa_c0b3_4b34_a1a5_10a9dcc0d148.slice. Sep 5 00:07:23.057471 kubelet[2553]: E0905 00:07:23.057416 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:24.061312 kubelet[2553]: I0905 00:07:24.061246 2553 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148" path="/var/lib/kubelet/pods/28bf50aa-c0b3-4b34-a1a5-10a9dcc0d148/volumes" Sep 5 00:07:26.988181 systemd[1]: Started sshd@24-10.0.0.14:22-10.0.0.1:42178.service - OpenSSH per-connection server daemon (10.0.0.1:42178). 
Sep 5 00:07:27.048355 sshd[6632]: Accepted publickey for core from 10.0.0.1 port 42178 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:27.050649 sshd[6632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:27.055601 systemd-logind[1453]: New session 25 of user core. Sep 5 00:07:27.065227 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 5 00:07:27.262736 sshd[6632]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:27.267497 systemd[1]: sshd@24-10.0.0.14:22-10.0.0.1:42178.service: Deactivated successfully. Sep 5 00:07:27.270192 systemd[1]: session-25.scope: Deactivated successfully. Sep 5 00:07:27.271029 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Sep 5 00:07:27.272065 systemd-logind[1453]: Removed session 25. Sep 5 00:07:32.275456 systemd[1]: Started sshd@25-10.0.0.14:22-10.0.0.1:34314.service - OpenSSH per-connection server daemon (10.0.0.1:34314). Sep 5 00:07:32.434071 sshd[6650]: Accepted publickey for core from 10.0.0.1 port 34314 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:32.436101 sshd[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:32.441398 systemd-logind[1453]: New session 26 of user core. Sep 5 00:07:32.444268 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 5 00:07:32.574273 sshd[6650]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:32.585463 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. Sep 5 00:07:32.588179 systemd[1]: sshd@25-10.0.0.14:22-10.0.0.1:34314.service: Deactivated successfully. Sep 5 00:07:32.595825 systemd[1]: session-26.scope: Deactivated successfully. Sep 5 00:07:32.597525 systemd-logind[1453]: Removed session 26. Sep 5 00:07:34.057970 kubelet[2553]: E0905 00:07:34.057540 2553 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 5 00:07:37.603714 systemd[1]: Started sshd@26-10.0.0.14:22-10.0.0.1:34320.service - OpenSSH per-connection server daemon (10.0.0.1:34320). Sep 5 00:07:37.632866 sshd[6686]: Accepted publickey for core from 10.0.0.1 port 34320 ssh2: RSA SHA256:BK2KfYWcm4ejKzYRnzJitcOItG4HW08lduLIya09DLM Sep 5 00:07:37.635067 sshd[6686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 00:07:37.640625 systemd-logind[1453]: New session 27 of user core. Sep 5 00:07:37.651417 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 5 00:07:37.776694 sshd[6686]: pam_unix(sshd:session): session closed for user core Sep 5 00:07:37.780629 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit. Sep 5 00:07:37.781115 systemd[1]: sshd@26-10.0.0.14:22-10.0.0.1:34320.service: Deactivated successfully. Sep 5 00:07:37.784530 systemd[1]: session-27.scope: Deactivated successfully. Sep 5 00:07:37.788936 systemd-logind[1453]: Removed session 27.