Sep 10 00:39:39.993721 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Sep 9 22:56:44 -00 2025
Sep 10 00:39:39.993753 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:39:39.993769 kernel: BIOS-provided physical RAM map:
Sep 10 00:39:39.993778 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 10 00:39:39.993786 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 10 00:39:39.993794 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 10 00:39:39.993805 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 10 00:39:39.993813 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 10 00:39:39.993822 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 10 00:39:39.993831 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 10 00:39:39.993844 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 10 00:39:39.993853 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 10 00:39:39.993866 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 10 00:39:39.993876 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 10 00:39:39.993890 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 10 00:39:39.993900 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 10 00:39:39.993914 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 10 00:39:39.993923 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 10 00:39:39.993945 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 10 00:39:39.993954 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 10 00:39:39.993963 kernel: NX (Execute Disable) protection: active
Sep 10 00:39:39.993972 kernel: APIC: Static calls initialized
Sep 10 00:39:39.993982 kernel: efi: EFI v2.7 by EDK II
Sep 10 00:39:39.993991 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 10 00:39:39.994001 kernel: SMBIOS 2.8 present.
Sep 10 00:39:39.994010 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 10 00:39:39.994019 kernel: Hypervisor detected: KVM
Sep 10 00:39:39.994033 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 10 00:39:39.994042 kernel: kvm-clock: using sched offset of 5401509022 cycles
Sep 10 00:39:39.994052 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 10 00:39:39.994063 kernel: tsc: Detected 2794.748 MHz processor
Sep 10 00:39:39.994073 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 10 00:39:39.994083 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 10 00:39:39.994092 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 10 00:39:39.994102 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 10 00:39:39.994112 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 10 00:39:39.994126 kernel: Using GB pages for direct mapping
Sep 10 00:39:39.994135 kernel: Secure boot disabled
Sep 10 00:39:39.994145 kernel: ACPI: Early table checksum verification disabled
Sep 10 00:39:39.994155 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 10 00:39:39.994170 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 00:39:39.994180 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:39:39.994218 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:39:39.994235 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 10 00:39:39.994245 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:39:39.994260 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:39:39.994270 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:39:39.994280 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 00:39:39.994290 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 10 00:39:39.994300 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 10 00:39:39.994314 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 10 00:39:39.994324 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 10 00:39:39.994334 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 10 00:39:39.994344 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 10 00:39:39.994355 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 10 00:39:39.994365 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 10 00:39:39.994375 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 10 00:39:39.994385 kernel: No NUMA configuration found
Sep 10 00:39:39.994398 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 10 00:39:39.994413 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 10 00:39:39.994424 kernel: Zone ranges:
Sep 10 00:39:39.994434 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 10 00:39:39.994444 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 10 00:39:39.994454 kernel: Normal empty
Sep 10 00:39:39.994465 kernel: Movable zone start for each node
Sep 10 00:39:39.994474 kernel: Early memory node ranges
Sep 10 00:39:39.994484 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 10 00:39:39.994494 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 10 00:39:39.994505 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 10 00:39:39.994519 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 10 00:39:39.994529 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 10 00:39:39.994539 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 10 00:39:39.994553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 10 00:39:39.994564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:39:39.994574 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 10 00:39:39.994584 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 10 00:39:39.994594 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 10 00:39:39.994604 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 10 00:39:39.994619 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 10 00:39:39.994629 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 10 00:39:39.994639 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 10 00:39:39.994649 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 10 00:39:39.994659 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 10 00:39:39.994669 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 10 00:39:39.994679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 10 00:39:39.994689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 10 00:39:39.994700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 10 00:39:39.994714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 10 00:39:39.994724 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 10 00:39:39.994735 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 10 00:39:39.994745 kernel: TSC deadline timer available
Sep 10 00:39:39.994755 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 10 00:39:39.994766 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 10 00:39:39.994776 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 10 00:39:39.994787 kernel: kvm-guest: setup PV sched yield
Sep 10 00:39:39.994797 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 10 00:39:39.994811 kernel: Booting paravirtualized kernel on KVM
Sep 10 00:39:39.994821 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 10 00:39:39.994831 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 10 00:39:39.994841 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 10 00:39:39.994851 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 10 00:39:39.994861 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 10 00:39:39.994871 kernel: kvm-guest: PV spinlocks enabled
Sep 10 00:39:39.994881 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 10 00:39:39.994892 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:39:39.994911 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 00:39:39.994922 kernel: random: crng init done
Sep 10 00:39:39.994944 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 00:39:39.994954 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 00:39:39.994964 kernel: Fallback order for Node 0: 0
Sep 10 00:39:39.994974 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 10 00:39:39.994984 kernel: Policy zone: DMA32
Sep 10 00:39:39.994994 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 00:39:39.995009 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 166140K reserved, 0K cma-reserved)
Sep 10 00:39:39.995019 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 00:39:39.995029 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 10 00:39:39.995040 kernel: ftrace: allocated 149 pages with 4 groups
Sep 10 00:39:39.995050 kernel: Dynamic Preempt: voluntary
Sep 10 00:39:39.995071 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 00:39:39.995090 kernel: rcu: RCU event tracing is enabled.
Sep 10 00:39:39.995102 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 00:39:39.995112 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 00:39:39.995123 kernel: Rude variant of Tasks RCU enabled.
Sep 10 00:39:39.995134 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 00:39:39.995144 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 00:39:39.995159 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 00:39:39.995170 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 10 00:39:39.995184 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 00:39:39.995300 kernel: Console: colour dummy device 80x25
Sep 10 00:39:39.995312 kernel: printk: console [ttyS0] enabled
Sep 10 00:39:39.995327 kernel: ACPI: Core revision 20230628
Sep 10 00:39:39.995339 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 10 00:39:39.995350 kernel: APIC: Switch to symmetric I/O mode setup
Sep 10 00:39:39.995360 kernel: x2apic enabled
Sep 10 00:39:39.995371 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 10 00:39:39.995383 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 10 00:39:39.995393 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 10 00:39:39.995404 kernel: kvm-guest: setup PV IPIs
Sep 10 00:39:39.995414 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 10 00:39:39.995429 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 10 00:39:39.995440 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 10 00:39:39.995451 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 10 00:39:39.995461 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 10 00:39:39.995471 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 10 00:39:39.995482 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 10 00:39:39.995492 kernel: Spectre V2 : Mitigation: Retpolines
Sep 10 00:39:39.995502 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 10 00:39:39.995513 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 10 00:39:39.995527 kernel: active return thunk: retbleed_return_thunk
Sep 10 00:39:39.995538 kernel: RETBleed: Mitigation: untrained return thunk
Sep 10 00:39:39.995549 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 10 00:39:39.995559 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 10 00:39:39.995574 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 10 00:39:39.995586 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 10 00:39:39.995597 kernel: active return thunk: srso_return_thunk
Sep 10 00:39:39.995607 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 10 00:39:39.995623 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 10 00:39:39.995634 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 10 00:39:39.995644 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 10 00:39:39.995655 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 10 00:39:39.995665 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 10 00:39:39.995676 kernel: Freeing SMP alternatives memory: 32K
Sep 10 00:39:39.995686 kernel: pid_max: default: 32768 minimum: 301
Sep 10 00:39:39.995696 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 10 00:39:39.995706 kernel: landlock: Up and running.
Sep 10 00:39:39.995720 kernel: SELinux: Initializing.
Sep 10 00:39:39.995731 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:39:39.995742 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 00:39:39.995753 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 10 00:39:39.995764 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:39:39.995774 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:39:39.995785 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 00:39:39.995796 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 10 00:39:39.995807 kernel: ... version:                0
Sep 10 00:39:39.995821 kernel: ... bit width:              48
Sep 10 00:39:39.995831 kernel: ... generic registers:      6
Sep 10 00:39:39.995842 kernel: ... value mask:             0000ffffffffffff
Sep 10 00:39:39.995852 kernel: ... max period:             00007fffffffffff
Sep 10 00:39:39.995862 kernel: ... fixed-purpose events:   0
Sep 10 00:39:39.995873 kernel: ... event mask:             000000000000003f
Sep 10 00:39:39.995884 kernel: signal: max sigframe size: 1776
Sep 10 00:39:39.995894 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 00:39:39.995906 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 00:39:39.995920 kernel: smp: Bringing up secondary CPUs ...
Sep 10 00:39:39.995942 kernel: smpboot: x86: Booting SMP configuration:
Sep 10 00:39:39.995953 kernel: .... node #0, CPUs: #1 #2 #3
Sep 10 00:39:39.995964 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 00:39:39.995978 kernel: smpboot: Max logical packages: 1
Sep 10 00:39:39.995989 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 10 00:39:39.996000 kernel: devtmpfs: initialized
Sep 10 00:39:39.996011 kernel: x86/mm: Memory block size: 128MB
Sep 10 00:39:39.996022 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 10 00:39:39.996033 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 10 00:39:39.996048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 10 00:39:39.996058 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 10 00:39:39.996069 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 10 00:39:39.996080 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 00:39:39.996091 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 00:39:39.996102 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 00:39:39.996113 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 00:39:39.996124 kernel: audit: initializing netlink subsys (disabled)
Sep 10 00:39:39.996138 kernel: audit: type=2000 audit(1757464778.619:1): state=initialized audit_enabled=0 res=1
Sep 10 00:39:39.996149 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 00:39:39.996160 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 10 00:39:39.996171 kernel: cpuidle: using governor menu
Sep 10 00:39:39.996182 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 00:39:39.996210 kernel: dca service started, version 1.12.1
Sep 10 00:39:39.996225 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 10 00:39:39.996238 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 10 00:39:39.996252 kernel: PCI: Using configuration type 1 for base access
Sep 10 00:39:39.996271 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 10 00:39:39.996285 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 00:39:39.996298 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 00:39:39.996329 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 00:39:39.996343 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 00:39:39.996356 kernel: ACPI: Added _OSI(Module Device)
Sep 10 00:39:39.996370 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 00:39:39.996384 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 00:39:39.996398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 00:39:39.996416 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 10 00:39:39.996427 kernel: ACPI: Interpreter enabled
Sep 10 00:39:39.996438 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 10 00:39:39.996449 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 10 00:39:39.996460 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 10 00:39:39.996471 kernel: PCI: Using E820 reservations for host bridge windows
Sep 10 00:39:39.996482 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 10 00:39:39.996493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 00:39:39.996773 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 00:39:39.996980 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 10 00:39:39.997170 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 10 00:39:39.997283 kernel: PCI host bridge to bus 0000:00
Sep 10 00:39:39.999895 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 10 00:39:40.000069 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 10 00:39:40.000245 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 10 00:39:40.000418 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 10 00:39:40.000566 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 10 00:39:40.000711 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 10 00:39:40.000854 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 00:39:40.001065 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 10 00:39:40.001317 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 10 00:39:40.001480 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 10 00:39:40.001653 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 10 00:39:40.001820 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 10 00:39:40.001996 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 10 00:39:40.002159 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 10 00:39:40.002413 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 10 00:39:40.002578 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 10 00:39:40.004249 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 10 00:39:40.004461 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 10 00:39:40.004615 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 10 00:39:40.004747 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 10 00:39:40.004876 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 10 00:39:40.005017 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 10 00:39:40.005166 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 10 00:39:40.005473 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 10 00:39:40.005644 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 10 00:39:40.005810 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 10 00:39:40.005984 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 10 00:39:40.006166 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 10 00:39:40.009766 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 10 00:39:40.009979 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 10 00:39:40.010119 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 10 00:39:40.010328 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 10 00:39:40.010559 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 10 00:39:40.010736 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 10 00:39:40.010754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 10 00:39:40.010765 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 10 00:39:40.010776 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 10 00:39:40.010793 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 10 00:39:40.010805 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 10 00:39:40.010816 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 10 00:39:40.010827 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 10 00:39:40.010838 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 10 00:39:40.010849 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 10 00:39:40.010860 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 10 00:39:40.010871 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 10 00:39:40.010882 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 10 00:39:40.010897 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 10 00:39:40.010908 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 10 00:39:40.010919 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 10 00:39:40.010943 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 10 00:39:40.010954 kernel: iommu: Default domain type: Translated
Sep 10 00:39:40.010965 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 10 00:39:40.010976 kernel: efivars: Registered efivars operations
Sep 10 00:39:40.010986 kernel: PCI: Using ACPI for IRQ routing
Sep 10 00:39:40.010997 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 10 00:39:40.011008 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 10 00:39:40.011024 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 10 00:39:40.011035 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 10 00:39:40.011045 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 10 00:39:40.011329 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 10 00:39:40.011533 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 10 00:39:40.011709 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 10 00:39:40.011726 kernel: vgaarb: loaded
Sep 10 00:39:40.011737 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 10 00:39:40.011768 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 10 00:39:40.011796 kernel: clocksource: Switched to clocksource kvm-clock
Sep 10 00:39:40.011819 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 00:39:40.011845 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 00:39:40.011872 kernel: pnp: PnP ACPI init
Sep 10 00:39:40.012107 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 10 00:39:40.012127 kernel: pnp: PnP ACPI: found 6 devices
Sep 10 00:39:40.012138 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 10 00:39:40.012155 kernel: NET: Registered PF_INET protocol family
Sep 10 00:39:40.012166 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 00:39:40.012180 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 00:39:40.012222 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 00:39:40.012236 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 00:39:40.012250 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 00:39:40.012264 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 00:39:40.012277 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:39:40.012291 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 00:39:40.012308 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 00:39:40.012318 kernel: NET: Registered PF_XDP protocol family
Sep 10 00:39:40.012499 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 10 00:39:40.012673 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 10 00:39:40.012829 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 10 00:39:40.012994 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 10 00:39:40.013149 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 10 00:39:40.013370 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 10 00:39:40.013551 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 10 00:39:40.013765 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 10 00:39:40.013782 kernel: PCI: CLS 0 bytes, default 64
Sep 10 00:39:40.013793 kernel: Initialise system trusted keyrings
Sep 10 00:39:40.013804 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 00:39:40.013815 kernel: Key type asymmetric registered
Sep 10 00:39:40.013825 kernel: Asymmetric key parser 'x509' registered
Sep 10 00:39:40.013836 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 10 00:39:40.013847 kernel: io scheduler mq-deadline registered
Sep 10 00:39:40.013864 kernel: io scheduler kyber registered
Sep 10 00:39:40.013875 kernel: io scheduler bfq registered
Sep 10 00:39:40.013885 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 10 00:39:40.013897 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 10 00:39:40.013909 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 10 00:39:40.013919 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 10 00:39:40.013939 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 00:39:40.013950 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 10 00:39:40.013961 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 10 00:39:40.013977 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 10 00:39:40.013987 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 10 00:39:40.014178 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 10 00:39:40.014248 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 10 00:39:40.014416 kernel: rtc_cmos 00:04: registered as rtc0
Sep 10 00:39:40.014572 kernel: rtc_cmos 00:04: setting system clock to 2025-09-10T00:39:39 UTC (1757464779)
Sep 10 00:39:40.014726 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 10 00:39:40.014742 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 10 00:39:40.014759 kernel: efifb: probing for efifb
Sep 10 00:39:40.014770 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 10 00:39:40.014781 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 10 00:39:40.014792 kernel: efifb: scrolling: redraw
Sep 10 00:39:40.014803 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 10 00:39:40.014814 kernel: Console: switching to colour frame buffer device 100x37
Sep 10 00:39:40.014848 kernel: fb0: EFI VGA frame buffer device
Sep 10 00:39:40.014863 kernel: pstore: Using crash dump compression: deflate
Sep 10 00:39:40.014874 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 10 00:39:40.014888 kernel: NET: Registered PF_INET6 protocol family
Sep 10 00:39:40.014899 kernel: Segment Routing with IPv6
Sep 10 00:39:40.014910 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 00:39:40.014921 kernel: NET: Registered PF_PACKET protocol family
Sep 10 00:39:40.014943 kernel: Key type dns_resolver registered
Sep 10 00:39:40.014954 kernel: IPI shorthand broadcast: enabled
Sep 10 00:39:40.014965 kernel: sched_clock: Marking stable (1107003950, 132425863)->(1412972749, -173542936)
Sep 10 00:39:40.014976 kernel: registered taskstats version 1
Sep 10 00:39:40.014987 kernel: Loading compiled-in X.509 certificates
Sep 10 00:39:40.015004 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: a614f1c62f27a560d677bbf0283703118c9005ec'
Sep 10 00:39:40.015015 kernel: Key type .fscrypt registered
Sep 10 00:39:40.015027 kernel: Key type fscrypt-provisioning registered
Sep 10 00:39:40.015039 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 00:39:40.015050 kernel: ima: Allocated hash algorithm: sha1
Sep 10 00:39:40.015062 kernel: ima: No architecture policies found
Sep 10 00:39:40.015073 kernel: clk: Disabling unused clocks
Sep 10 00:39:40.015114 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 10 00:39:40.015150 kernel: Write protecting the kernel read-only data: 36864k
Sep 10 00:39:40.015162 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 10 00:39:40.015174 kernel: Run /init as init process
Sep 10 00:39:40.015185 kernel: with arguments:
Sep 10 00:39:40.015221 kernel: /init
Sep 10 00:39:40.015234 kernel: with environment:
Sep 10 00:39:40.015248 kernel: HOME=/
Sep 10 00:39:40.015270 kernel: TERM=linux
Sep 10 00:39:40.015284 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 00:39:40.015308 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 10 00:39:40.015326 systemd[1]: Detected virtualization kvm.
Sep 10 00:39:40.015342 systemd[1]: Detected architecture x86-64.
Sep 10 00:39:40.015356 systemd[1]: Running in initrd.
Sep 10 00:39:40.015379 systemd[1]: No hostname configured, using default hostname.
Sep 10 00:39:40.015394 systemd[1]: Hostname set to .
Sep 10 00:39:40.015410 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 00:39:40.015425 systemd[1]: Queued start job for default target initrd.target.
Sep 10 00:39:40.015441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 00:39:40.015456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 00:39:40.015469 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 00:39:40.015482 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 00:39:40.015498 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 00:39:40.015511 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 00:39:40.015526 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 00:39:40.015539 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 00:39:40.015552 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 00:39:40.015564 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 00:39:40.015576 systemd[1]: Reached target paths.target - Path Units.
Sep 10 00:39:40.015592 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 00:39:40.015604 systemd[1]: Reached target swap.target - Swaps.
Sep 10 00:39:40.015616 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 00:39:40.015629 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 00:39:40.015641 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 00:39:40.015653 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 00:39:40.015666 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 10 00:39:40.015678 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 00:39:40.015690 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 00:39:40.015705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 00:39:40.015717 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 00:39:40.015729 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 00:39:40.015741 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 00:39:40.015753 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 00:39:40.015765 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 00:39:40.015776 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 00:39:40.015788 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 00:39:40.015804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:39:40.015816 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 00:39:40.015828 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 00:39:40.015839 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 00:39:40.015882 systemd-journald[192]: Collecting audit messages is disabled.
Sep 10 00:39:40.015916 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 00:39:40.015939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 00:39:40.015951 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:39:40.015963 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:39:40.015980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 00:39:40.015992 systemd-journald[192]: Journal started
Sep 10 00:39:40.016016 systemd-journald[192]: Runtime Journal (/run/log/journal/ecfcfc6b47404e41a13c95d169caca35) is 6.0M, max 48.3M, 42.2M free.
Sep 10 00:39:40.013762 systemd-modules-load[194]: Inserted module 'overlay'
Sep 10 00:39:40.021228 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 00:39:40.025422 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 00:39:40.034403 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 00:39:40.038125 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:39:40.047211 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 00:39:40.050024 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 10 00:39:40.051336 kernel: Bridge firewalling registered
Sep 10 00:39:40.053468 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 00:39:40.056005 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 00:39:40.058914 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 00:39:40.064607 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 00:39:40.074116 dracut-cmdline[219]: dracut-dracut-053
Sep 10 00:39:40.077849 dracut-cmdline[219]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=a3dbdfb50e14c8de85dda26f853cdd6055239b4b8b15c08fb0eb00b67ce87a58
Sep 10 00:39:40.079767 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 00:39:40.091340 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 00:39:40.127468 systemd-resolved[241]: Positive Trust Anchors:
Sep 10 00:39:40.127519 systemd-resolved[241]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:39:40.127586 systemd-resolved[241]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 00:39:40.132748 systemd-resolved[241]: Defaulting to hostname 'linux'.
Sep 10 00:39:40.136006 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 00:39:40.145386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 00:39:40.194250 kernel: SCSI subsystem initialized
Sep 10 00:39:40.206366 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 00:39:40.222248 kernel: iscsi: registered transport (tcp)
Sep 10 00:39:40.249247 kernel: iscsi: registered transport (qla4xxx)
Sep 10 00:39:40.249333 kernel: QLogic iSCSI HBA Driver
Sep 10 00:39:40.313928 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 00:39:40.342519 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 00:39:40.370234 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 00:39:40.370319 kernel: device-mapper: uevent: version 1.0.3
Sep 10 00:39:40.372221 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 10 00:39:40.422233 kernel: raid6: avx2x4 gen() 26884 MB/s
Sep 10 00:39:40.439228 kernel: raid6: avx2x2 gen() 29718 MB/s
Sep 10 00:39:40.456484 kernel: raid6: avx2x1 gen() 23466 MB/s
Sep 10 00:39:40.456538 kernel: raid6: using algorithm avx2x2 gen() 29718 MB/s
Sep 10 00:39:40.474457 kernel: raid6: .... xor() 17234 MB/s, rmw enabled
Sep 10 00:39:40.474557 kernel: raid6: using avx2x2 recovery algorithm
Sep 10 00:39:40.497239 kernel: xor: automatically using best checksumming function   avx
Sep 10 00:39:40.716238 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 00:39:40.737559 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 00:39:40.762400 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 00:39:40.775157 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Sep 10 00:39:40.780317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 00:39:40.787353 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 00:39:40.804102 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Sep 10 00:39:40.861257 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 00:39:40.872360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 00:39:40.949280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 00:39:40.956402 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 00:39:40.973977 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 00:39:40.978648 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 00:39:40.982347 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 00:39:40.983997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 00:39:40.994369 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 00:39:41.007326 kernel: cryptd: max_cpu_qlen set to 1000
Sep 10 00:39:41.017087 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 00:39:41.028220 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 10 00:39:41.034624 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 00:39:41.034830 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 10 00:39:41.037239 kernel: AES CTR mode by8 optimization enabled
Sep 10 00:39:41.042903 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 00:39:41.042947 kernel: GPT:9289727 != 19775487
Sep 10 00:39:41.042961 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 00:39:41.042986 kernel: GPT:9289727 != 19775487
Sep 10 00:39:41.042999 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 00:39:41.043012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:39:41.046903 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 00:39:41.051115 kernel: libata version 3.00 loaded.
Sep 10 00:39:41.048336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:39:41.049924 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:39:41.053171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:39:41.053526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:39:41.059622 kernel: ahci 0000:00:1f.2: version 3.0
Sep 10 00:39:41.059824 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 10 00:39:41.056634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:39:41.064716 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 10 00:39:41.064892 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 10 00:39:41.065053 kernel: scsi host0: ahci
Sep 10 00:39:41.065241 kernel: scsi host1: ahci
Sep 10 00:39:41.068208 kernel: scsi host2: ahci
Sep 10 00:39:41.068504 kernel: scsi host3: ahci
Sep 10 00:39:41.068367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:39:41.073756 kernel: scsi host4: ahci
Sep 10 00:39:41.073953 kernel: scsi host5: ahci
Sep 10 00:39:41.077207 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 10 00:39:41.077232 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 10 00:39:41.077252 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 10 00:39:41.077268 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 10 00:39:41.079326 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 10 00:39:41.079348 kernel: BTRFS: device fsid 47ffa5df-7ab2-4f1a-b68f-595717991426 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (470)
Sep 10 00:39:41.079360 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 10 00:39:41.086269 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477)
Sep 10 00:39:41.103538 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 00:39:41.120063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 00:39:41.129122 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 00:39:41.131740 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 00:39:41.141924 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 00:39:41.152537 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 00:39:41.153816 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 00:39:41.153917 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:39:41.156390 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:39:41.160180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 00:39:41.164357 disk-uuid[556]: Primary Header is updated.
Sep 10 00:39:41.164357 disk-uuid[556]: Secondary Entries is updated.
Sep 10 00:39:41.164357 disk-uuid[556]: Secondary Header is updated.
Sep 10 00:39:41.168269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:39:41.173225 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:39:41.185965 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 00:39:41.197605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 00:39:41.230059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 00:39:41.395972 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 10 00:39:41.396058 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 10 00:39:41.396070 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 10 00:39:41.396209 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 10 00:39:41.397242 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 10 00:39:41.398265 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 10 00:39:41.399236 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 10 00:39:41.399287 kernel: ata3.00: applying bridge limits
Sep 10 00:39:41.400731 kernel: ata3.00: configured for UDMA/100
Sep 10 00:39:41.401239 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 10 00:39:41.450285 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 10 00:39:41.450739 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 10 00:39:41.464234 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 10 00:39:42.176271 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 00:39:42.176597 disk-uuid[558]: The operation has completed successfully.
Sep 10 00:39:42.210776 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 00:39:42.210984 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 00:39:42.245359 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 00:39:42.249414 sh[599]: Success
Sep 10 00:39:42.264225 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 10 00:39:42.306161 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 00:39:42.320422 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 00:39:42.323610 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 00:39:42.338311 kernel: BTRFS info (device dm-0): first mount of filesystem 47ffa5df-7ab2-4f1a-b68f-595717991426
Sep 10 00:39:42.338343 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:39:42.338354 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 10 00:39:42.339421 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 00:39:42.340791 kernel: BTRFS info (device dm-0): using free space tree
Sep 10 00:39:42.346826 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 00:39:42.350006 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 00:39:42.363351 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 00:39:42.382341 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 00:39:42.389581 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:39:42.389617 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 10 00:39:42.389633 kernel: BTRFS info (device vda6): using free space tree
Sep 10 00:39:42.393612 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 10 00:39:42.405589 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 10 00:39:42.407407 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038
Sep 10 00:39:42.418637 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 00:39:42.427392 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 00:39:42.494511 ignition[691]: Ignition 2.19.0
Sep 10 00:39:42.495579 ignition[691]: Stage: fetch-offline
Sep 10 00:39:42.495643 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:39:42.495659 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:39:42.495785 ignition[691]: parsed url from cmdline: ""
Sep 10 00:39:42.495790 ignition[691]: no config URL provided
Sep 10 00:39:42.495798 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 00:39:42.495811 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Sep 10 00:39:42.495849 ignition[691]: op(1): [started] loading QEMU firmware config module
Sep 10 00:39:42.495868 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 00:39:42.503694 ignition[691]: op(1): [finished] loading QEMU firmware config module
Sep 10 00:39:42.528544 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 00:39:42.539519 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 00:39:42.547928 ignition[691]: parsing config with SHA512: 112ea49fa065133942599163604ec3a97dbe8b46fbe964512c7c4a1e3b693b105b85bbcdf2aad3daeb5db1e908de49d60ccbaf811b12eff12474f49a838ad88d
Sep 10 00:39:42.557169 unknown[691]: fetched base config from "system"
Sep 10 00:39:42.557453 unknown[691]: fetched user config from "qemu"
Sep 10 00:39:42.557903 ignition[691]: fetch-offline: fetch-offline passed
Sep 10 00:39:42.557997 ignition[691]: Ignition finished successfully
Sep 10 00:39:42.560246 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 00:39:42.573304 systemd-networkd[787]: lo: Link UP
Sep 10 00:39:42.573315 systemd-networkd[787]: lo: Gained carrier
Sep 10 00:39:42.576609 systemd-networkd[787]: Enumeration completed
Sep 10 00:39:42.576862 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 00:39:42.579654 systemd[1]: Reached target network.target - Network.
Sep 10 00:39:42.579751 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 00:39:42.583393 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:39:42.583400 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 00:39:42.587821 systemd-networkd[787]: eth0: Link UP
Sep 10 00:39:42.587833 systemd-networkd[787]: eth0: Gained carrier
Sep 10 00:39:42.587843 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 00:39:42.587953 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 00:39:42.602303 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 00:39:42.607883 ignition[790]: Ignition 2.19.0
Sep 10 00:39:42.607893 ignition[790]: Stage: kargs
Sep 10 00:39:42.608078 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:39:42.608090 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:39:42.611858 ignition[790]: kargs: kargs passed
Sep 10 00:39:42.611909 ignition[790]: Ignition finished successfully
Sep 10 00:39:42.616280 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 00:39:42.631674 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 00:39:42.649269 ignition[799]: Ignition 2.19.0
Sep 10 00:39:42.649283 ignition[799]: Stage: disks
Sep 10 00:39:42.649471 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Sep 10 00:39:42.649483 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 00:39:42.650324 ignition[799]: disks: disks passed
Sep 10 00:39:42.650375 ignition[799]: Ignition finished successfully
Sep 10 00:39:42.656413 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 00:39:42.657731 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 00:39:42.659578 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 00:39:42.659951 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 00:39:42.660483 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 00:39:42.660798 systemd[1]: Reached target basic.target - Basic System.
Sep 10 00:39:42.682482 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 00:39:42.697311 systemd-resolved[241]: Detected conflict on linux IN A 10.0.0.90
Sep 10 00:39:42.697339 systemd-resolved[241]: Hostname conflict, changing published hostname from 'linux' to 'linux3'.
Sep 10 00:39:42.700654 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 10 00:39:42.716702 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 00:39:42.729428 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 00:39:42.831235 kernel: EXT4-fs (vda9): mounted filesystem 0a9bf3c7-f8cd-4d40-b949-283957ba2f96 r/w with ordered data mode. Quota mode: none.
Sep 10 00:39:42.832059 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 00:39:42.834183 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 00:39:42.847311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 00:39:42.850134 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 00:39:42.851736 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 00:39:42.851779 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 00:39:42.864699 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (817) Sep 10 00:39:42.864744 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038 Sep 10 00:39:42.864760 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:39:42.864775 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:39:42.851806 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 00:39:42.859811 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 10 00:39:42.870015 kernel: BTRFS info (device vda6): auto enabling async discard Sep 10 00:39:42.865798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 10 00:39:42.872664 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 10 00:39:42.953041 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Sep 10 00:39:42.957593 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Sep 10 00:39:42.962019 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Sep 10 00:39:42.965829 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Sep 10 00:39:43.065801 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 10 00:39:43.081379 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 10 00:39:43.083412 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 10 00:39:43.096272 kernel: BTRFS info (device vda6): last unmount of filesystem 81146077-6e72-4c2f-a205-63f64096a038 Sep 10 00:39:43.109700 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 10 00:39:43.165898 ignition[933]: INFO : Ignition 2.19.0 Sep 10 00:39:43.165898 ignition[933]: INFO : Stage: mount Sep 10 00:39:43.168643 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:39:43.168643 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:39:43.168643 ignition[933]: INFO : mount: mount passed Sep 10 00:39:43.168643 ignition[933]: INFO : Ignition finished successfully Sep 10 00:39:43.169147 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 10 00:39:43.180324 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 10 00:39:43.337323 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 10 00:39:43.353451 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 10 00:39:43.362247 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Sep 10 00:39:43.364643 kernel: BTRFS info (device vda6): first mount of filesystem 81146077-6e72-4c2f-a205-63f64096a038 Sep 10 00:39:43.364673 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 10 00:39:43.364687 kernel: BTRFS info (device vda6): using free space tree Sep 10 00:39:43.369230 kernel: BTRFS info (device vda6): auto enabling async discard Sep 10 00:39:43.370711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
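The four "cut: ... No such file or directory" entries above come from initrd-setup-root running cut over account databases that do not exist yet on a first boot, so the errors are expected and benign. A rough Python equivalent of one such step, with the delimiter and field choice as illustrative assumptions:

    from pathlib import Path

    def cut_field(path: str, delim: str = ":", field: int = 1):
        """Rough equivalent of `cut -d: -f1 FILE`, including the error
        message the journal captured when the file is absent."""
        p = Path(path)
        if not p.exists():
            print(f"cut: {path}: No such file or directory")
            return
        for line in p.read_text().splitlines():
            parts = line.split(delim)
            if len(parts) >= field:
                yield parts[field - 1]

    for name in cut_field("/sysroot/etc/passwd"):
        print(name)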
Sep 10 00:39:43.405468 ignition[961]: INFO : Ignition 2.19.0 Sep 10 00:39:43.405468 ignition[961]: INFO : Stage: files Sep 10 00:39:43.407579 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:39:43.407579 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:39:43.407579 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Sep 10 00:39:43.410986 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 10 00:39:43.410986 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 10 00:39:43.416801 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 10 00:39:43.428377 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 10 00:39:43.430141 unknown[961]: wrote ssh authorized keys file for user: core Sep 10 00:39:43.431447 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 10 00:39:43.432768 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 10 00:39:43.432768 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 10 00:39:43.481846 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 10 00:39:43.680252 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 10 00:39:43.680252 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 10 00:39:43.685260 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 10 00:39:43.685260 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:39:43.685260 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 10 00:39:43.685260 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 10 00:39:43.692289 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 10 00:39:44.033517 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 10 00:39:44.603620 systemd-networkd[787]: eth0: Gained IPv6LL Sep 10 00:39:45.705410 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 10 00:39:45.705410 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 10 00:39:45.710039 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 10 00:39:45.747443 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:39:45.753484 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 10 00:39:45.755168 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 10 00:39:45.755168 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 10 00:39:45.755168 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 10 00:39:45.755168 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:39:45.755168 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 10 00:39:45.755168 ignition[961]: INFO : files: files passed Sep 10 00:39:45.755168 ignition[961]: INFO : Ignition finished successfully Sep 10 00:39:45.766459 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 10 00:39:45.779377 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 10 00:39:45.782058 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 00:39:45.786473 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 10 00:39:45.786602 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 10 00:39:45.806956 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Sep 10 00:39:45.812507 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:39:45.812507 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:39:45.815685 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 10 00:39:45.817578 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:39:45.820587 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 10 00:39:45.837479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 10 00:39:45.871952 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 10 00:39:45.873238 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 10 00:39:45.876413 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 10 00:39:45.878825 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 10 00:39:45.881274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 10 00:39:45.884112 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 10 00:39:45.911896 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 00:39:45.926032 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 10 00:39:45.942297 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:39:45.945497 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:39:45.948506 systemd[1]: Stopped target timers.target - Timer Units. Sep 10 00:39:45.950902 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 10 00:39:45.952258 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 10 00:39:45.955629 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 10 00:39:45.958336 systemd[1]: Stopped target basic.target - Basic System. Sep 10 00:39:45.960707 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 10 00:39:45.963542 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 10 00:39:45.966440 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 10 00:39:45.969250 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 10 00:39:45.971922 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 10 00:39:45.975099 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 10 00:39:45.977704 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 10 00:39:45.980329 systemd[1]: Stopped target swap.target - Swaps. Sep 10 00:39:45.982393 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 10 00:39:45.983745 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 10 00:39:45.986681 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 10 00:39:45.989321 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:39:45.992051 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 10 00:39:45.993136 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:39:45.995793 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 10 00:39:45.996840 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 10 00:39:45.999262 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 10 00:39:46.000347 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 10 00:39:46.002836 systemd[1]: Stopped target paths.target - Path Units. Sep 10 00:39:46.004688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 10 00:39:46.010329 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:39:46.010548 systemd[1]: Stopped target slices.target - Slice Units. Sep 10 00:39:46.013996 systemd[1]: Stopped target sockets.target - Socket Units. Sep 10 00:39:46.014906 systemd[1]: iscsid.socket: Deactivated successfully. Sep 10 00:39:46.015019 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 10 00:39:46.016580 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 10 00:39:46.016678 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 10 00:39:46.018335 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 10 00:39:46.018456 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 10 00:39:46.021700 systemd[1]: ignition-files.service: Deactivated successfully. Sep 10 00:39:46.021818 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 10 00:39:46.033346 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 10 00:39:46.035847 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 10 00:39:46.036915 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 10 00:39:46.037039 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:39:46.039862 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 10 00:39:46.039976 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 10 00:39:46.046985 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 10 00:39:46.047109 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 10 00:39:46.050791 ignition[1014]: INFO : Ignition 2.19.0 Sep 10 00:39:46.050791 ignition[1014]: INFO : Stage: umount Sep 10 00:39:46.050791 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 10 00:39:46.050791 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 10 00:39:46.055904 ignition[1014]: INFO : umount: umount passed Sep 10 00:39:46.055904 ignition[1014]: INFO : Ignition finished successfully Sep 10 00:39:46.053885 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 10 00:39:46.054011 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 10 00:39:46.056106 systemd[1]: Stopped target network.target - Network. Sep 10 00:39:46.057627 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 10 00:39:46.057686 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
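Each Ignition stage in this log ends with the same two-line signature: "<stage>: <stage> passed" followed by "Ignition finished successfully". A small parser for that transcript shape, using the umount block above as sample input:

    import re

    LOG = """\
    ignition[1014]: INFO : Ignition 2.19.0
    ignition[1014]: INFO : Stage: umount
    ignition[1014]: INFO : umount: umount passed
    ignition[1014]: INFO : Ignition finished successfully"""

    def stage_result(text: str):
        """Extract (stage, passed) from an Ignition stage transcript."""
        stage = re.search(r"Stage: (\w+)", text).group(1)
        passed = re.search(rf"{stage}: {stage} passed", text) is not None
        return stage, passed

    print(stage_result(LOG))  # ('umount', True)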
Sep 10 00:39:46.059578 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 10 00:39:46.059648 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 10 00:39:46.061512 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 10 00:39:46.061576 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 10 00:39:46.063448 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 10 00:39:46.063502 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 10 00:39:46.065509 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 10 00:39:46.067600 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 10 00:39:46.070916 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 10 00:39:46.072270 systemd-networkd[787]: eth0: DHCPv6 lease lost Sep 10 00:39:46.075885 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 10 00:39:46.076073 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 10 00:39:46.079406 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 10 00:39:46.081342 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 10 00:39:46.090257 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 10 00:39:46.090327 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:39:46.098339 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 10 00:39:46.102313 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 10 00:39:46.102650 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 10 00:39:46.107909 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:39:46.108073 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:39:46.109694 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 10 00:39:46.109835 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 10 00:39:46.113915 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 00:39:46.113992 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:39:46.116883 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:39:46.141831 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 00:39:46.143669 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 00:39:46.147798 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 00:39:46.148202 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:39:46.153699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 00:39:46.153876 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 00:39:46.155109 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 00:39:46.155213 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:39:46.158656 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 00:39:46.158866 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 00:39:46.164815 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 00:39:46.164904 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
Sep 10 00:39:46.168085 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 00:39:46.169366 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 00:39:46.186769 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 00:39:46.208166 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 00:39:46.208327 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:39:46.210645 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Sep 10 00:39:46.210718 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:39:46.212962 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 10 00:39:46.213042 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:39:46.218510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:39:46.219527 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:39:46.224648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 00:39:46.225980 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 00:39:46.654301 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 10 00:39:46.655435 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 10 00:39:46.658372 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 00:39:46.660530 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 10 00:39:46.660602 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 10 00:39:46.674369 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 00:39:46.684948 systemd[1]: Switching root. Sep 10 00:39:46.718051 systemd-journald[192]: Journal stopped Sep 10 00:39:48.169624 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Sep 10 00:39:48.169719 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 00:39:48.169749 kernel: SELinux: policy capability open_perms=1 Sep 10 00:39:48.169765 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 00:39:48.169778 kernel: SELinux: policy capability always_check_network=0 Sep 10 00:39:48.169794 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 00:39:48.169806 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 00:39:48.169818 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 00:39:48.169829 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 00:39:48.169840 kernel: audit: type=1403 audit(1757464787.123:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 00:39:48.169862 systemd[1]: Successfully loaded SELinux policy in 43.255ms. Sep 10 00:39:48.169901 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.532ms. Sep 10 00:39:48.169920 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 10 00:39:48.169933 systemd[1]: Detected virtualization kvm. Sep 10 00:39:48.169947 systemd[1]: Detected architecture x86-64. 
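The systemd 255 banner above encodes compile-time options as a +/- feature list. Splitting it mechanically (the string is copied verbatim from the log):

    FEATURES = (
        "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
        "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
        "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 "
        "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT"
    )

    def split_features(banner: str):
        """Separate a systemd version banner's feature flags into
        compiled-in (+) and compiled-out (-) sets."""
        tokens = banner.split()
        return ({t[1:] for t in tokens if t[0] == "+"},
                {t[1:] for t in tokens if t[0] == "-"})

    enabled, disabled = split_features(FEATURES)
    assert "SELINUX" in enabled and "APPARMOR" in disabled
    print(len(enabled), "features on,", len(disabled), "off")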
Sep 10 00:39:48.169959 systemd[1]: Detected first boot. Sep 10 00:39:48.169974 systemd[1]: Initializing machine ID from VM UUID. Sep 10 00:39:48.169986 zram_generator::config[1059]: No configuration found. Sep 10 00:39:48.170008 systemd[1]: Populated /etc with preset unit settings. Sep 10 00:39:48.170020 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 00:39:48.170038 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 00:39:48.170051 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 00:39:48.170063 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 00:39:48.170076 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 00:39:48.170089 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 00:39:48.170101 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 00:39:48.170116 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 00:39:48.170128 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 00:39:48.170140 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 00:39:48.170158 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 00:39:48.170172 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 00:39:48.170186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 00:39:48.170222 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 00:39:48.170234 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 00:39:48.170248 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 00:39:48.170269 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 10 00:39:48.170281 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 10 00:39:48.170293 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 00:39:48.170312 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 00:39:48.170325 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 00:39:48.170337 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 00:39:48.170349 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 00:39:48.170361 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 00:39:48.170374 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 00:39:48.170386 systemd[1]: Reached target slices.target - Slice Units. Sep 10 00:39:48.170401 systemd[1]: Reached target swap.target - Swaps. Sep 10 00:39:48.170418 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 00:39:48.170430 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 00:39:48.170442 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 00:39:48.170455 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
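Slice names above such as system-addon\x2dconfig.slice use systemd's unit-name escaping: "-" is the slice hierarchy separator, so a literal dash inside a component is written \x2d. A minimal unescape for that rule (the real systemd-escape handles more cases; this covers only \xNN sequences):

    def unescape_unit(name: str) -> str:
        """Undo systemd's \\xNN escaping in unit names, e.g. \\x2d for a
        literal dash inside a slice component."""
        out, i = [], 0
        while i < len(name):
            if name.startswith(r"\x", i) and len(name) >= i + 4:
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            else:
                out.append(name[i])
                i += 1
        return "".join(out)

    assert unescape_unit(r"system-addon\x2dconfig.slice") == "system-addon-config.slice"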
Sep 10 00:39:48.170469 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 00:39:48.170482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 00:39:48.170494 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 00:39:48.170506 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 10 00:39:48.170518 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 00:39:48.170536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:48.170555 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 00:39:48.170567 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 00:39:48.170579 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 00:39:48.170592 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 00:39:48.170605 systemd[1]: Reached target machines.target - Containers. Sep 10 00:39:48.170617 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 00:39:48.170629 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:39:48.170647 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 00:39:48.170659 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 00:39:48.170671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:39:48.170684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:39:48.170704 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:39:48.170717 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 00:39:48.170729 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:39:48.170741 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 00:39:48.170760 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 00:39:48.170773 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 10 00:39:48.170785 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 00:39:48.170797 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 00:39:48.170809 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 00:39:48.170821 kernel: loop: module loaded Sep 10 00:39:48.170840 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 00:39:48.170854 kernel: fuse: init (API version 7.39) Sep 10 00:39:48.170889 systemd-journald[1122]: Collecting audit messages is disabled. Sep 10 00:39:48.170924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 00:39:48.170936 systemd-journald[1122]: Journal started Sep 10 00:39:48.170959 systemd-journald[1122]: Runtime Journal (/run/log/journal/ecfcfc6b47404e41a13c95d169caca35) is 6.0M, max 48.3M, 42.2M free. Sep 10 00:39:47.853360 systemd[1]: Queued start job for default target multi-user.target. 
Sep 10 00:39:47.871613 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 00:39:47.872167 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 00:39:48.183259 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 00:39:48.190156 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 00:39:48.217726 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 00:39:48.217746 systemd[1]: Stopped verity-setup.service. Sep 10 00:39:48.217762 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:48.217779 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 00:39:48.220424 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 00:39:48.221740 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 00:39:48.222966 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 00:39:48.224073 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 00:39:48.225565 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 00:39:48.227584 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 10 00:39:48.228894 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 00:39:48.230529 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 00:39:48.230761 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 00:39:48.247061 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:39:48.247264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:39:48.249218 kernel: ACPI: bus type drm_connector registered Sep 10 00:39:48.249384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:39:48.249569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:39:48.251449 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:39:48.251632 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:39:48.253064 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 00:39:48.253287 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 00:39:48.254764 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:39:48.254941 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:39:48.256414 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 00:39:48.258040 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 00:39:48.264643 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 00:39:48.279841 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 00:39:48.300943 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 00:39:48.322272 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 00:39:48.326419 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 00:39:48.326478 systemd[1]: Reached target local-fs.target - Local File Systems. 
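Units like modprobe@dm_mod.service and modprobe@loop.service above are instances of a single template, modprobe@.service, with the instance string after "@" passed to the module loader. Splitting an instantiated name back into template and instance:

    def split_template(unit: str):
        """Split an instantiated unit such as modprobe@dm_mod.service into
        its template name and instance string."""
        name, _, suffix = unit.rpartition(".")
        prefix, _, instance = name.partition("@")
        return f"{prefix}@.{suffix}", instance

    assert split_template("modprobe@dm_mod.service") == ("modprobe@.service", "dm_mod")
    assert split_template("systemd-fsck@dev-vda6.service")[1] == "dev-vda6"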
Sep 10 00:39:48.329071 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 10 00:39:48.334421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 00:39:48.356180 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 00:39:48.358552 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:39:48.387379 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 00:39:48.389940 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 00:39:48.391211 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:39:48.398373 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 10 00:39:48.400448 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:39:48.402476 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:39:48.413441 systemd-journald[1122]: Time spent on flushing to /var/log/journal/ecfcfc6b47404e41a13c95d169caca35 is 20.113ms for 995 entries. Sep 10 00:39:48.413441 systemd-journald[1122]: System Journal (/var/log/journal/ecfcfc6b47404e41a13c95d169caca35) is 8.0M, max 195.6M, 187.6M free. Sep 10 00:39:48.618910 systemd-journald[1122]: Received client request to flush runtime journal. Sep 10 00:39:48.619008 kernel: loop0: detected capacity change from 0 to 142488 Sep 10 00:39:48.439290 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 00:39:48.464871 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 10 00:39:48.776121 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 00:39:48.778149 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 00:39:48.779990 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 00:39:48.781434 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 00:39:48.783007 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 00:39:48.784649 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 00:39:48.787081 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 00:39:48.812010 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 00:39:48.913353 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 00:39:48.916395 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 10 00:39:48.931551 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 10 00:39:48.948385 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 10 00:39:49.078235 kernel: loop1: detected capacity change from 0 to 140768 Sep 10 00:39:49.086962 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. Sep 10 00:39:49.086984 systemd-tmpfiles[1166]: ACLs are not supported, ignoring. 
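journald's flush report above gives only aggregates; deriving the per-entry cost and the runtime journal's headroom from the logged figures:

    # Figures copied from the systemd-journald lines above.
    flush_ms, entries = 20.113, 995
    runtime_used_mib, runtime_max_mib = 6.0, 48.3

    print(f"~{flush_ms / entries * 1000:.1f} us per flushed entry")  # ~20.2 us
    print(f"runtime journal at {runtime_used_mib / runtime_max_mib:.0%} of its cap")  # ~12%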
Sep 10 00:39:49.088569 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:39:49.100335 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 10 00:39:49.115507 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 00:39:49.136451 kernel: loop2: detected capacity change from 0 to 229808 Sep 10 00:39:49.195437 kernel: loop3: detected capacity change from 0 to 142488 Sep 10 00:39:49.197798 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 10 00:39:49.235831 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 00:39:49.258295 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 10 00:39:49.258330 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 10 00:39:49.265921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 00:39:49.351247 kernel: loop4: detected capacity change from 0 to 140768 Sep 10 00:39:49.364369 kernel: loop5: detected capacity change from 0 to 229808 Sep 10 00:39:49.371821 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 00:39:49.372632 (sd-merge)[1195]: Merged extensions into '/usr'. Sep 10 00:39:49.377450 systemd[1]: Reloading requested from client PID 1165 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 00:39:49.377472 systemd[1]: Reloading... Sep 10 00:39:49.450235 zram_generator::config[1226]: No configuration found. Sep 10 00:39:49.679209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:39:49.736491 systemd[1]: Reloading finished in 358 ms. Sep 10 00:39:49.779990 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 00:39:49.781921 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 00:39:49.867047 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 10 00:39:49.881013 ldconfig[1160]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 00:39:49.918118 systemd[1]: Starting ensure-sysext.service... Sep 10 00:39:49.924386 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 00:39:50.038875 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 00:39:50.039305 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 00:39:50.040396 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 00:39:50.040723 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Sep 10 00:39:50.040809 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Sep 10 00:39:50.044478 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 00:39:50.044498 systemd-tmpfiles[1264]: Skipping /boot Sep 10 00:39:50.044865 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Sep 10 00:39:50.044888 systemd[1]: Reloading... Sep 10 00:39:50.060252 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 10 00:39:50.060277 systemd-tmpfiles[1264]: Skipping /boot Sep 10 00:39:50.128476 zram_generator::config[1289]: No configuration found. Sep 10 00:39:50.370120 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:39:50.446414 systemd[1]: Reloading finished in 401 ms. Sep 10 00:39:50.471985 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 00:39:50.473699 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 00:39:50.487570 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 00:39:50.516804 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:39:50.521028 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 00:39:50.524891 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 00:39:50.534330 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 00:39:50.547790 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 00:39:50.552699 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 00:39:50.557981 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:50.558231 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:39:50.567702 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:39:50.574656 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:39:50.578384 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:39:50.579667 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:39:50.587226 augenrules[1356]: No rules Sep 10 00:39:50.591780 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 00:39:50.593284 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:50.594779 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:39:50.597085 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 00:39:50.599561 systemd-udevd[1343]: Using default interface naming scheme 'v255'. Sep 10 00:39:50.600092 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:39:50.600753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:39:50.603093 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:39:50.603397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:39:50.606328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:39:50.606558 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:39:50.617854 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
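The docker.socket warning repeated across both reloads above is systemd normalizing a legacy listener path under /var/run to /run. The rewrite amounts to a prefix substitution:

    def normalize_runtime_path(path: str) -> str:
        """Map a legacy /var/run/... listener path to /run/..., mirroring
        the rewrite systemd logs for docker.socket above."""
        legacy = "/var/run"
        if path == legacy or path.startswith(legacy + "/"):
            return "/run" + path[len(legacy):]
        return path

    assert normalize_runtime_path("/var/run/docker.sock") == "/run/docker.sock"
    assert normalize_runtime_path("/run/docker.sock") == "/run/docker.sock"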
Sep 10 00:39:50.618368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:39:50.627762 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 00:39:50.635401 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 00:39:50.640709 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:50.641378 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:39:50.644309 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:39:50.649719 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 00:39:50.654647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:39:50.655960 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:39:50.656122 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:50.657119 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 00:39:50.659139 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 00:39:50.661064 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 00:39:50.663117 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 10 00:39:50.669515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:39:50.669767 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:39:50.676804 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:39:50.677917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:39:50.699290 systemd[1]: Finished ensure-sysext.service. Sep 10 00:39:50.700868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 00:39:50.701778 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 00:39:50.718968 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:50.719206 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 00:39:50.730279 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 00:39:50.737430 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 00:39:50.743562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 00:39:50.744903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 00:39:50.746710 systemd-resolved[1338]: Positive Trust Anchors: Sep 10 00:39:50.746741 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 00:39:50.746775 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 00:39:50.750404 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 00:39:50.751889 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 00:39:50.751992 systemd-resolved[1338]: Defaulting to hostname 'linux'. Sep 10 00:39:50.756522 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 00:39:50.757885 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 00:39:50.757931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 10 00:39:50.758331 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 00:39:50.761274 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 00:39:50.761475 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 00:39:50.763061 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 00:39:50.763273 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 00:39:50.764803 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 00:39:50.765030 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 00:39:50.775018 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 00:39:50.776659 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 00:39:50.810214 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1396) Sep 10 00:39:50.857928 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 10 00:39:50.884436 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 10 00:39:50.896123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 00:39:50.920485 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 00:39:50.925214 kernel: ACPI: button: Power Button [PWRF] Sep 10 00:39:50.931935 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 00:39:50.936239 systemd[1]: Reached target time-set.target - System Time Set.
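The positive trust anchor above is the root zone's DS record (key tag 20326, algorithm 8 = RSASHA256, digest type 2 = SHA-256), the anchor systemd-resolved compiles in for DNSSEC validation. A minimal parser for that record format:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DSRecord:
        owner: str
        key_tag: int
        algorithm: int    # 8 = RSA/SHA-256
        digest_type: int  # 2 = SHA-256
        digest: str

    def parse_ds(record: str) -> DSRecord:
        owner, _cls, _rtype, tag, alg, dtype, digest = record.split()
        return DSRecord(owner, int(tag), int(alg), int(dtype), digest.lower())

    anchor = parse_ds(". IN DS 20326 8 2 "
                      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    assert anchor.owner == "." and anchor.key_tag == 20326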
Sep 10 00:39:50.940386 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 10 00:39:50.937538 systemd-networkd[1405]: lo: Link UP Sep 10 00:39:50.937549 systemd-networkd[1405]: lo: Gained carrier Sep 10 00:39:50.957913 systemd-networkd[1405]: Enumeration completed Sep 10 00:39:50.958067 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 00:39:50.960373 systemd[1]: Reached target network.target - Network. Sep 10 00:39:50.961822 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:39:50.961834 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 00:39:50.962695 systemd-networkd[1405]: eth0: Link UP Sep 10 00:39:50.962704 systemd-networkd[1405]: eth0: Gained carrier Sep 10 00:39:50.962716 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 00:39:50.970441 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 00:39:50.980254 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.90/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 00:39:50.983370 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 00:39:50.983886 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Sep 10 00:39:50.985350 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 00:39:50.985409 systemd-timesyncd[1406]: Initial clock synchronization to Wed 2025-09-10 00:39:51.316286 UTC. Sep 10 00:39:51.046787 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:39:51.090868 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 00:39:51.091377 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:39:51.113228 kernel: mousedev: PS/2 mouse device common for all mice Sep 10 00:39:51.115361 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 10 00:39:51.115687 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 10 00:39:51.116271 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 10 00:39:51.118185 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 10 00:39:51.118826 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 00:39:51.136800 kernel: kvm_amd: TSC scaling supported Sep 10 00:39:51.136918 kernel: kvm_amd: Nested Virtualization enabled Sep 10 00:39:51.136942 kernel: kvm_amd: Nested Paging enabled Sep 10 00:39:51.136961 kernel: kvm_amd: LBR virtualization supported Sep 10 00:39:51.137463 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 10 00:39:51.138495 kernel: kvm_amd: Virtual GIF supported Sep 10 00:39:51.195294 kernel: EDAC MC: Ver: 3.0.0 Sep 10 00:39:51.226358 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 10 00:39:51.247795 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 10 00:39:51.250061 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 00:39:51.261057 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
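The timesyncd entries above record an SNTP exchange with 10.0.0.1:123 followed by an initial clock step. The offset and delay behind such a step follow the standard RFC 4330 formulas over the four exchange timestamps; the sample numbers below are invented:

    def sntp_offset_delay(t0: float, t1: float, t2: float, t3: float):
        """RFC 4330 clock math: t0/t3 are the client's send/receive times,
        t1/t2 the server's receive/transmit times."""
        offset = ((t1 - t0) + (t2 - t3)) / 2.0
        delay = (t3 - t0) - (t2 - t1)
        return offset, delay

    # Invented sample: server ~0.33 s ahead, ~1 ms round trip.
    off, dly = sntp_offset_delay(100.000, 100.3305, 100.3306, 100.001)
    print(f"offset {off:+.4f} s, delay {dly * 1000:.1f} ms")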
Sep 10 00:39:51.314930 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 10 00:39:51.316688 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 00:39:51.317918 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 00:39:51.319248 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 00:39:51.343310 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 00:39:51.344942 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 00:39:51.346301 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 00:39:51.347677 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 00:39:51.348984 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 00:39:51.349032 systemd[1]: Reached target paths.target - Path Units. Sep 10 00:39:51.350047 systemd[1]: Reached target timers.target - Timer Units. Sep 10 00:39:51.352365 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 00:39:51.355408 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 00:39:51.369361 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 00:39:51.382780 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 10 00:39:51.384523 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 00:39:51.385830 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 00:39:51.386878 systemd[1]: Reached target basic.target - Basic System. Sep 10 00:39:51.387936 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:39:51.387967 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 00:39:51.389262 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 00:39:51.392144 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 00:39:51.397397 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 00:39:51.399299 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 10 00:39:51.424016 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 00:39:51.425404 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 00:39:51.428779 jq[1445]: false Sep 10 00:39:51.429021 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 00:39:51.434369 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 00:39:51.437160 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 00:39:51.439832 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 00:39:51.456525 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 10 00:39:51.459332 extend-filesystems[1446]: Found loop3 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found loop4 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found loop5 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found sr0 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda1 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda2 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda3 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found usr Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda4 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda6 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda7 Sep 10 00:39:51.460533 extend-filesystems[1446]: Found vda9 Sep 10 00:39:51.460533 extend-filesystems[1446]: Checking size of /dev/vda9 Sep 10 00:39:51.459901 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 00:39:51.472812 dbus-daemon[1444]: [system] SELinux support is enabled Sep 10 00:39:51.460707 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 00:39:51.462843 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 00:39:51.467424 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 00:39:51.490889 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 00:39:51.493350 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 10 00:39:51.499900 jq[1457]: true Sep 10 00:39:51.500517 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 00:39:51.500800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 00:39:51.505733 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 00:39:51.508256 extend-filesystems[1446]: Resized partition /dev/vda9 Sep 10 00:39:51.506033 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 00:39:51.508888 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 00:39:51.509148 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 00:39:51.516788 update_engine[1456]: I20250910 00:39:51.514617 1456 main.cc:92] Flatcar Update Engine starting Sep 10 00:39:51.516788 update_engine[1456]: I20250910 00:39:51.516503 1456 update_check_scheduler.cc:74] Next update check in 6m6s Sep 10 00:39:51.524694 extend-filesystems[1476]: resize2fs 1.47.1 (20-May-2024) Sep 10 00:39:51.550321 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1381) Sep 10 00:39:51.535070 systemd[1]: Started update-engine.service - Update Engine. Sep 10 00:39:51.550484 jq[1469]: true Sep 10 00:39:51.556621 (ntainerd)[1478]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 00:39:51.640362 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 00:39:51.640427 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
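With update_engine started and its next check scheduled, the updater's state can be inspected from a shell; assuming the standard Flatcar client binary is available:

    update_engine_client -status    # CURRENT_OP should read UPDATE_STATUS_IDLE here

which would match the currentOperation="UPDATE_STATUS_IDLE" that locksmithd reports when it starts below.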
Sep 10 00:39:51.642237 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 00:39:51.642267 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 00:39:51.662731 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 00:39:51.684621 tar[1466]: linux-amd64/LICENSE Sep 10 00:39:51.685046 tar[1466]: linux-amd64/helm Sep 10 00:39:51.692443 systemd-logind[1453]: Watching system buttons on /dev/input/event1 (Power Button) Sep 10 00:39:51.692470 systemd-logind[1453]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 10 00:39:51.693361 systemd-logind[1453]: New seat seat0. Sep 10 00:39:51.694859 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 00:39:51.769092 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 00:39:51.971349 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 00:39:52.000898 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 00:39:52.056842 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 00:39:52.119782 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 00:39:52.120049 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 00:39:52.157787 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 00:39:52.192558 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 00:39:52.202864 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 00:39:52.206986 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 00:39:52.210789 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 10 00:39:52.212295 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 00:39:52.300258 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 00:39:52.332351 extend-filesystems[1476]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 00:39:52.332351 extend-filesystems[1476]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 00:39:52.332351 extend-filesystems[1476]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 00:39:52.343284 extend-filesystems[1446]: Resized filesystem in /dev/vda9 Sep 10 00:39:52.334741 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 00:39:52.335045 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 00:39:52.346094 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Sep 10 00:39:52.347693 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 00:39:52.351811 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 00:39:52.541136 tar[1466]: linux-amd64/README.md Sep 10 00:39:52.562389 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 00:39:52.587095 containerd[1478]: time="2025-09-10T00:39:52.586897949Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 10 00:39:52.611486 systemd-networkd[1405]: eth0: Gained IPv6LL Sep 10 00:39:52.616026 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
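The extend-filesystems activity above is an online ext4 grow: the partition was already enlarged, so the filesystem is resized while still mounted. The manual equivalent (a sketch of what resize2fs 1.47.1 is doing here) is:

    resize2fs /dev/vda9    # grow mounted ext4 to fill the partition; the log shows 553472 -> 1864699 4k blocks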
Sep 10 00:39:52.618428 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 00:39:52.629589 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 00:39:52.632926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:39:52.636119 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 00:39:52.640726 containerd[1478]: time="2025-09-10T00:39:52.640632657Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:39:52.644289 containerd[1478]: time="2025-09-10T00:39:52.644167964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:39:52.644289 containerd[1478]: time="2025-09-10T00:39:52.644268385Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 10 00:39:52.644475 containerd[1478]: time="2025-09-10T00:39:52.644306931Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 10 00:39:52.644797 containerd[1478]: time="2025-09-10T00:39:52.644665271Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 10 00:39:52.644797 containerd[1478]: time="2025-09-10T00:39:52.644711583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 10 00:39:52.645081 containerd[1478]: time="2025-09-10T00:39:52.644867545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:39:52.645081 containerd[1478]: time="2025-09-10T00:39:52.644897340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.645286481Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.645318405Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.645333416Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.645343684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.645498711Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.645893438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.646055961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.646077358Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.646193744Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 10 00:39:52.647473 containerd[1478]: time="2025-09-10T00:39:52.646300021Z" level=info msg="metadata content store policy set" policy=shared Sep 10 00:39:52.669319 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 00:39:52.669637 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 00:39:52.671671 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 00:39:52.713930 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 00:39:52.779039 containerd[1478]: time="2025-09-10T00:39:52.778923048Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 10 00:39:52.779039 containerd[1478]: time="2025-09-10T00:39:52.779049412Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 10 00:39:52.779039 containerd[1478]: time="2025-09-10T00:39:52.779075698Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 10 00:39:52.779039 containerd[1478]: time="2025-09-10T00:39:52.779119851Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 10 00:39:52.779413 containerd[1478]: time="2025-09-10T00:39:52.779177375Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 10 00:39:52.779599 containerd[1478]: time="2025-09-10T00:39:52.779534894Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 10 00:39:52.780029 containerd[1478]: time="2025-09-10T00:39:52.779991931Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 10 00:39:52.780253 containerd[1478]: time="2025-09-10T00:39:52.780176390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 10 00:39:52.780253 containerd[1478]: time="2025-09-10T00:39:52.780207327Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 10 00:39:52.780253 containerd[1478]: time="2025-09-10T00:39:52.780255197Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 10 00:39:52.780358 containerd[1478]: time="2025-09-10T00:39:52.780278939Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780358 containerd[1478]: time="2025-09-10T00:39:52.780295196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Sep 10 00:39:52.780358 containerd[1478]: time="2025-09-10T00:39:52.780312014Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780358 containerd[1478]: time="2025-09-10T00:39:52.780333640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780358 containerd[1478]: time="2025-09-10T00:39:52.780351235Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780367192Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780386876Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780404701Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780466689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780490203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780510613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780530 containerd[1478]: time="2025-09-10T00:39:52.780528355Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780549970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780572425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780600289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780619329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780636417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780660419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780676676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780707 containerd[1478]: time="2025-09-10T00:39:52.780692934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780890 containerd[1478]: time="2025-09-10T00:39:52.780716583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Sep 10 00:39:52.780890 containerd[1478]: time="2025-09-10T00:39:52.780738166Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 10 00:39:52.780890 containerd[1478]: time="2025-09-10T00:39:52.780770079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780890 containerd[1478]: time="2025-09-10T00:39:52.780783139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.780890 containerd[1478]: time="2025-09-10T00:39:52.780794600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 10 00:39:52.780890 containerd[1478]: time="2025-09-10T00:39:52.780869243Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780896796Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780909400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780921836Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780935644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780950874Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780966705Z" level=info msg="NRI interface is disabled by configuration." Sep 10 00:39:52.781029 containerd[1478]: time="2025-09-10T00:39:52.780977606Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 10 00:39:52.781497 containerd[1478]: time="2025-09-10T00:39:52.781406488Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 10 00:39:52.781497 containerd[1478]: time="2025-09-10T00:39:52.781481193Z" level=info msg="Connect containerd service" Sep 10 00:39:52.781729 containerd[1478]: time="2025-09-10T00:39:52.781526831Z" level=info msg="using legacy CRI server" Sep 10 00:39:52.781729 containerd[1478]: time="2025-09-10T00:39:52.781535437Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 00:39:52.781729 containerd[1478]: time="2025-09-10T00:39:52.781720540Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 10 00:39:52.783342 containerd[1478]: time="2025-09-10T00:39:52.782525618Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:39:52.783342 
containerd[1478]: time="2025-09-10T00:39:52.783039099Z" level=info msg="Start subscribing containerd event" Sep 10 00:39:52.783342 containerd[1478]: time="2025-09-10T00:39:52.783340859Z" level=info msg="Start recovering state" Sep 10 00:39:52.783571 containerd[1478]: time="2025-09-10T00:39:52.783408266Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 00:39:52.783571 containerd[1478]: time="2025-09-10T00:39:52.783449813Z" level=info msg="Start event monitor" Sep 10 00:39:52.783571 containerd[1478]: time="2025-09-10T00:39:52.783478196Z" level=info msg="Start snapshots syncer" Sep 10 00:39:52.783571 containerd[1478]: time="2025-09-10T00:39:52.783485245Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 00:39:52.783571 containerd[1478]: time="2025-09-10T00:39:52.783489824Z" level=info msg="Start cni network conf syncer for default" Sep 10 00:39:52.783571 containerd[1478]: time="2025-09-10T00:39:52.783519961Z" level=info msg="Start streaming server" Sep 10 00:39:52.783740 systemd[1]: Started containerd.service - containerd container runtime. Sep 10 00:39:52.786109 containerd[1478]: time="2025-09-10T00:39:52.785687284Z" level=info msg="containerd successfully booted in 0.200891s" Sep 10 00:39:52.975774 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 00:39:52.989613 systemd[1]: Started sshd@0-10.0.0.90:22-10.0.0.1:50062.service - OpenSSH per-connection server daemon (10.0.0.1:50062). Sep 10 00:39:53.042579 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 50062 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:53.062378 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:53.074368 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 00:39:53.135918 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 00:39:53.140902 systemd-logind[1453]: New session 1 of user core. Sep 10 00:39:53.159739 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 00:39:53.191731 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 00:39:53.197848 (systemd)[1555]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 00:39:53.412201 systemd[1555]: Queued start job for default target default.target. Sep 10 00:39:53.425175 systemd[1555]: Created slice app.slice - User Application Slice. Sep 10 00:39:53.425218 systemd[1555]: Reached target paths.target - Paths. Sep 10 00:39:53.425253 systemd[1555]: Reached target timers.target - Timers. Sep 10 00:39:53.427756 systemd[1555]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 00:39:53.447820 systemd[1555]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 00:39:53.448064 systemd[1555]: Reached target sockets.target - Sockets. Sep 10 00:39:53.448095 systemd[1555]: Reached target basic.target - Basic System. Sep 10 00:39:53.448161 systemd[1555]: Reached target default.target - Main User Target. Sep 10 00:39:53.448205 systemd[1555]: Startup finished in 238ms. Sep 10 00:39:53.448775 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 00:39:53.452392 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 00:39:53.537242 systemd[1]: Started sshd@1-10.0.0.90:22-10.0.0.1:50072.service - OpenSSH per-connection server daemon (10.0.0.1:50072). 
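The 'failed to load cni during init' error containerd logged a short way above is expected on first boot: /etc/cni/net.d is empty until a CNI plugin is installed. For reference, a minimal bridge conflist of the shape the plugin loads (hypothetical values throughout; nothing like this exists on the host yet):

    # /etc/cni/net.d/10-bridge.conflist (hypothetical example)
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
      }]
    }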
Sep 10 00:39:53.575935 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 50072 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:53.581103 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:53.587293 systemd-logind[1453]: New session 2 of user core. Sep 10 00:39:53.597505 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 00:39:53.690196 sshd[1566]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:53.703472 systemd[1]: sshd@1-10.0.0.90:22-10.0.0.1:50072.service: Deactivated successfully. Sep 10 00:39:53.705544 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 00:39:53.707134 systemd-logind[1453]: Session 2 logged out. Waiting for processes to exit. Sep 10 00:39:53.715041 systemd[1]: Started sshd@2-10.0.0.90:22-10.0.0.1:50082.service - OpenSSH per-connection server daemon (10.0.0.1:50082). Sep 10 00:39:53.718149 systemd-logind[1453]: Removed session 2. Sep 10 00:39:53.752689 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 50082 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:39:53.754728 sshd[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:39:53.759524 systemd-logind[1453]: New session 3 of user core. Sep 10 00:39:53.770590 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 00:39:53.834546 sshd[1573]: pam_unix(sshd:session): session closed for user core Sep 10 00:39:53.839784 systemd[1]: sshd@2-10.0.0.90:22-10.0.0.1:50082.service: Deactivated successfully. Sep 10 00:39:53.842056 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 00:39:53.843027 systemd-logind[1453]: Session 3 logged out. Waiting for processes to exit. Sep 10 00:39:53.844312 systemd-logind[1453]: Removed session 3. Sep 10 00:39:54.336018 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:39:54.340859 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 00:39:54.341077 (kubelet)[1584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:39:54.342931 systemd[1]: Startup finished in 1.260s (kernel) + 7.376s (initrd) + 7.261s (userspace) = 15.897s. Sep 10 00:39:55.117415 kubelet[1584]: E0910 00:39:55.117316 1584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:39:55.122939 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:39:55.123254 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:39:55.123722 systemd[1]: kubelet.service: Consumed 2.127s CPU time. Sep 10 00:40:04.013848 systemd[1]: Started sshd@3-10.0.0.90:22-10.0.0.1:33478.service - OpenSSH per-connection server daemon (10.0.0.1:33478). Sep 10 00:40:04.058562 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 33478 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:40:04.060777 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:40:04.066651 systemd-logind[1453]: New session 4 of user core. Sep 10 00:40:04.081611 systemd[1]: Started session-4.scope - Session 4 of User core. 
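The kubelet exit above ('open /var/lib/kubelet/config.yaml: no such file or directory') is the normal pre-bootstrap state: that file is written by kubeadm (or an equivalent provisioner), not shipped with the OS. A minimal KubeletConfiguration of the kind that ends up there, with assumed values:

    # /var/lib/kubelet/config.yaml (sketch; values are assumptions)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd                     # consistent with CgroupDriver:systemd later in this log
    staticPodPath: /etc/kubernetes/manifests  # consistent with the static pod path the kubelet logs later

The unit keeps restarting and failing with the same error until bootstrap creates this file.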
Sep 10 00:40:04.140935 sshd[1598]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:04.159425 systemd[1]: sshd@3-10.0.0.90:22-10.0.0.1:33478.service: Deactivated successfully. Sep 10 00:40:04.161446 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 00:40:04.163424 systemd-logind[1453]: Session 4 logged out. Waiting for processes to exit. Sep 10 00:40:04.172578 systemd[1]: Started sshd@4-10.0.0.90:22-10.0.0.1:33482.service - OpenSSH per-connection server daemon (10.0.0.1:33482). Sep 10 00:40:04.173970 systemd-logind[1453]: Removed session 4. Sep 10 00:40:04.206504 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 33482 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:40:04.208283 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:40:04.212806 systemd-logind[1453]: New session 5 of user core. Sep 10 00:40:04.224585 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 10 00:40:04.280873 sshd[1605]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:04.297077 systemd[1]: sshd@4-10.0.0.90:22-10.0.0.1:33482.service: Deactivated successfully. Sep 10 00:40:04.300309 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 00:40:04.302716 systemd-logind[1453]: Session 5 logged out. Waiting for processes to exit. Sep 10 00:40:04.316808 systemd[1]: Started sshd@5-10.0.0.90:22-10.0.0.1:33484.service - OpenSSH per-connection server daemon (10.0.0.1:33484). Sep 10 00:40:04.318075 systemd-logind[1453]: Removed session 5. Sep 10 00:40:04.358104 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 33484 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:40:04.361228 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:40:04.368877 systemd-logind[1453]: New session 6 of user core. Sep 10 00:40:04.379529 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 10 00:40:04.451124 sshd[1612]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:04.465487 systemd[1]: sshd@5-10.0.0.90:22-10.0.0.1:33484.service: Deactivated successfully. Sep 10 00:40:04.468544 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 00:40:04.473910 systemd-logind[1453]: Session 6 logged out. Waiting for processes to exit. Sep 10 00:40:04.491681 systemd[1]: Started sshd@6-10.0.0.90:22-10.0.0.1:33498.service - OpenSSH per-connection server daemon (10.0.0.1:33498). Sep 10 00:40:04.493805 systemd-logind[1453]: Removed session 6. Sep 10 00:40:04.540358 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 33498 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:40:04.548427 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:40:04.561585 systemd-logind[1453]: New session 7 of user core. Sep 10 00:40:04.576298 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 00:40:04.666053 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 00:40:04.666615 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:40:04.701423 sudo[1622]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:04.704708 sshd[1619]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:04.727054 systemd[1]: sshd@6-10.0.0.90:22-10.0.0.1:33498.service: Deactivated successfully. 
Sep 10 00:40:04.730654 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:40:04.733130 systemd-logind[1453]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:40:04.746921 systemd[1]: Started sshd@7-10.0.0.90:22-10.0.0.1:33500.service - OpenSSH per-connection server daemon (10.0.0.1:33500). Sep 10 00:40:04.748960 systemd-logind[1453]: Removed session 7. Sep 10 00:40:04.786145 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 33500 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:40:04.789031 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:40:04.797867 systemd-logind[1453]: New session 8 of user core. Sep 10 00:40:04.807447 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 00:40:04.868924 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 00:40:04.869404 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:40:04.875255 sudo[1631]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:04.883698 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 10 00:40:04.884253 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:40:04.906772 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 10 00:40:04.913958 auditctl[1634]: No rules Sep 10 00:40:04.916132 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 00:40:04.916692 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 10 00:40:04.920305 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 10 00:40:04.968826 augenrules[1652]: No rules Sep 10 00:40:04.971488 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 10 00:40:04.973223 sudo[1630]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:04.975861 sshd[1627]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:04.992533 systemd[1]: sshd@7-10.0.0.90:22-10.0.0.1:33500.service: Deactivated successfully. Sep 10 00:40:04.994656 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:40:04.997185 systemd-logind[1453]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:40:05.007795 systemd[1]: Started sshd@8-10.0.0.90:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506). Sep 10 00:40:05.009443 systemd-logind[1453]: Removed session 8. Sep 10 00:40:05.044908 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:40:05.047057 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:40:05.052474 systemd-logind[1453]: New session 9 of user core. Sep 10 00:40:05.062611 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 00:40:05.123095 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 00:40:05.123699 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 00:40:05.125429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:40:05.145401 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 10 00:40:05.540972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:05.561628 (kubelet)[1683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:40:05.752316 kubelet[1683]: E0910 00:40:05.752235 1683 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:40:05.789471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:40:05.790502 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:40:06.093685 (dockerd)[1698]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 00:40:06.093695 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 00:40:06.947144 dockerd[1698]: time="2025-09-10T00:40:06.947039089Z" level=info msg="Starting up" Sep 10 00:40:07.813220 dockerd[1698]: time="2025-09-10T00:40:07.813123637Z" level=info msg="Loading containers: start." Sep 10 00:40:08.016277 kernel: Initializing XFRM netlink socket Sep 10 00:40:08.114320 systemd-networkd[1405]: docker0: Link UP Sep 10 00:40:08.166256 dockerd[1698]: time="2025-09-10T00:40:08.166176404Z" level=info msg="Loading containers: done." Sep 10 00:40:08.223168 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1737279512-merged.mount: Deactivated successfully. Sep 10 00:40:08.370061 dockerd[1698]: time="2025-09-10T00:40:08.369789934Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:40:08.370061 dockerd[1698]: time="2025-09-10T00:40:08.370043939Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 10 00:40:08.370299 dockerd[1698]: time="2025-09-10T00:40:08.370254005Z" level=info msg="Daemon has completed initialization" Sep 10 00:40:08.417040 dockerd[1698]: time="2025-09-10T00:40:08.416928903Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:40:08.417236 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:40:09.566717 containerd[1478]: time="2025-09-10T00:40:09.566603461Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 10 00:40:10.613502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379519352.mount: Deactivated successfully. 
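The image pulls that follow go through containerd's CRI plugin, not the Docker daemon started above. On a host in this state they can be reproduced or checked with crictl, assuming it is installed and pointed at the CRI socket:

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.33.4
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        images --digests    # lists the sha256 repo digests that the log entries below record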
Sep 10 00:40:12.182099 containerd[1478]: time="2025-09-10T00:40:12.182024680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:12.182768 containerd[1478]: time="2025-09-10T00:40:12.182718793Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664" Sep 10 00:40:12.183952 containerd[1478]: time="2025-09-10T00:40:12.183915875Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:12.187154 containerd[1478]: time="2025-09-10T00:40:12.187102009Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:12.188675 containerd[1478]: time="2025-09-10T00:40:12.188608251Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.621903125s" Sep 10 00:40:12.188746 containerd[1478]: time="2025-09-10T00:40:12.188675425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\"" Sep 10 00:40:12.189565 containerd[1478]: time="2025-09-10T00:40:12.189535102Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 10 00:40:14.088752 containerd[1478]: time="2025-09-10T00:40:14.088669990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:14.091954 containerd[1478]: time="2025-09-10T00:40:14.091883288Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066" Sep 10 00:40:14.092135 containerd[1478]: time="2025-09-10T00:40:14.092018540Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:14.095778 containerd[1478]: time="2025-09-10T00:40:14.095732734Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:14.096680 containerd[1478]: time="2025-09-10T00:40:14.096629671Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.907058889s" Sep 10 00:40:14.096680 containerd[1478]: time="2025-09-10T00:40:14.096674671Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\"" Sep 10 00:40:14.097290 
containerd[1478]: time="2025-09-10T00:40:14.097248454Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 10 00:40:16.032908 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:40:16.049366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:40:16.252499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:16.261470 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:40:16.515281 kubelet[1913]: E0910 00:40:16.515018 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:40:16.520210 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:40:16.520439 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:40:17.062294 containerd[1478]: time="2025-09-10T00:40:17.062064732Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:17.063456 containerd[1478]: time="2025-09-10T00:40:17.062415309Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911" Sep 10 00:40:17.066052 containerd[1478]: time="2025-09-10T00:40:17.065990935Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:17.073517 containerd[1478]: time="2025-09-10T00:40:17.073425984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:17.078758 containerd[1478]: time="2025-09-10T00:40:17.078683751Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.981279757s" Sep 10 00:40:17.078960 containerd[1478]: time="2025-09-10T00:40:17.078905601Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\"" Sep 10 00:40:17.081914 containerd[1478]: time="2025-09-10T00:40:17.081870219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 10 00:40:19.698235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1928696106.mount: Deactivated successfully. 
Sep 10 00:40:20.549522 containerd[1478]: time="2025-09-10T00:40:20.549433411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:20.550217 containerd[1478]: time="2025-09-10T00:40:20.550125564Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626" Sep 10 00:40:20.551685 containerd[1478]: time="2025-09-10T00:40:20.551632804Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:20.557320 containerd[1478]: time="2025-09-10T00:40:20.557240998Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:20.558113 containerd[1478]: time="2025-09-10T00:40:20.558051193Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 3.476126216s" Sep 10 00:40:20.558113 containerd[1478]: time="2025-09-10T00:40:20.558103255Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\"" Sep 10 00:40:20.558866 containerd[1478]: time="2025-09-10T00:40:20.558833041Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 00:40:21.228792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311242836.mount: Deactivated successfully. 
Sep 10 00:40:23.458002 containerd[1478]: time="2025-09-10T00:40:23.457918134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:23.459141 containerd[1478]: time="2025-09-10T00:40:23.459096174Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Sep 10 00:40:23.460392 containerd[1478]: time="2025-09-10T00:40:23.460324904Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:23.463699 containerd[1478]: time="2025-09-10T00:40:23.463655876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:23.464996 containerd[1478]: time="2025-09-10T00:40:23.464834989Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.905947224s" Sep 10 00:40:23.464996 containerd[1478]: time="2025-09-10T00:40:23.464985102Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Sep 10 00:40:23.465733 containerd[1478]: time="2025-09-10T00:40:23.465685415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:40:24.057229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327149883.mount: Deactivated successfully. 
Sep 10 00:40:24.064461 containerd[1478]: time="2025-09-10T00:40:24.064389572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:24.065319 containerd[1478]: time="2025-09-10T00:40:24.065230465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 10 00:40:24.066567 containerd[1478]: time="2025-09-10T00:40:24.066491771Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:24.069671 containerd[1478]: time="2025-09-10T00:40:24.069626840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:24.070475 containerd[1478]: time="2025-09-10T00:40:24.070435616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 604.705665ms" Sep 10 00:40:24.070580 containerd[1478]: time="2025-09-10T00:40:24.070479536Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 10 00:40:24.071260 containerd[1478]: time="2025-09-10T00:40:24.071230653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 00:40:24.738023 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount752172613.mount: Deactivated successfully. Sep 10 00:40:26.532681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 10 00:40:26.541926 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:40:26.823593 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:26.828726 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:40:27.562207 kubelet[2049]: E0910 00:40:27.559529 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:40:27.564382 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:40:27.564664 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
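The 'Scheduled restart job, restart counter is at N' entries come from the kubelet unit's restart policy; a unit of this shape typically carries something like the following fragment (a sketch, not the actual unit file on this host):

    # kubelet.service, [Service] section (illustrative)
    Restart=always
    RestartSec=10

so each config-file failure above is followed by another attempt roughly ten seconds later, which matches the spacing of the failures in this log.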
Sep 10 00:40:27.653209 containerd[1478]: time="2025-09-10T00:40:27.653131680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:27.653966 containerd[1478]: time="2025-09-10T00:40:27.653875266Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871" Sep 10 00:40:27.655091 containerd[1478]: time="2025-09-10T00:40:27.655051119Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:27.658267 containerd[1478]: time="2025-09-10T00:40:27.658219240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:27.659411 containerd[1478]: time="2025-09-10T00:40:27.659376414Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.588116681s" Sep 10 00:40:27.659453 containerd[1478]: time="2025-09-10T00:40:27.659411755Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Sep 10 00:40:30.207897 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:30.222583 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:40:30.251328 systemd[1]: Reloading requested from client PID 2091 ('systemctl') (unit session-9.scope)... Sep 10 00:40:30.251350 systemd[1]: Reloading... Sep 10 00:40:30.354225 zram_generator::config[2127]: No configuration found. Sep 10 00:40:30.679946 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:40:30.763213 systemd[1]: Reloading finished in 511 ms. Sep 10 00:40:30.814915 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:40:30.815063 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:40:30.815417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:30.817462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:40:31.001430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:31.007232 (kubelet)[2179]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:40:31.054506 kubelet[2179]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:40:31.054506 kubelet[2179]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
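One actionable warning from the reload above: systemd rewrote docker.socket's legacy listen path on the fly ('/var/run/docker.sock → /run/docker.sock'). The permanent fix is the edit the message asks for, in the unit's [Socket] section:

    # /usr/lib/systemd/system/docker.socket (line 6), change
    ListenStream=/var/run/docker.sock
    # to
    ListenStream=/run/docker.sock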
Sep 10 00:40:31.054506 kubelet[2179]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:40:31.055021 kubelet[2179]: I0910 00:40:31.054543 2179 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:40:31.479011 kubelet[2179]: I0910 00:40:31.478925 2179 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 00:40:31.479011 kubelet[2179]: I0910 00:40:31.478965 2179 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:40:31.479258 kubelet[2179]: I0910 00:40:31.479233 2179 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 00:40:31.573630 kubelet[2179]: I0910 00:40:31.573552 2179 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:40:31.575264 kubelet[2179]: E0910 00:40:31.575172 2179 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 00:40:31.581758 kubelet[2179]: E0910 00:40:31.581731 2179 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:40:31.581822 kubelet[2179]: I0910 00:40:31.581761 2179 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:40:31.588275 kubelet[2179]: I0910 00:40:31.588148 2179 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:40:31.588580 kubelet[2179]: I0910 00:40:31.588536 2179 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:40:31.588769 kubelet[2179]: I0910 00:40:31.588571 2179 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:40:31.588899 kubelet[2179]: I0910 00:40:31.588788 2179 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:40:31.588899 kubelet[2179]: I0910 00:40:31.588803 2179 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 00:40:31.589982 kubelet[2179]: I0910 00:40:31.589949 2179 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:40:31.595902 kubelet[2179]: I0910 00:40:31.595854 2179 kubelet.go:480] "Attempting to sync node with API server" Sep 10 00:40:31.595902 kubelet[2179]: I0910 00:40:31.595876 2179 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:40:31.595902 kubelet[2179]: I0910 00:40:31.595908 2179 kubelet.go:386] "Adding apiserver pod source" Sep 10 00:40:31.607731 kubelet[2179]: I0910 00:40:31.607666 2179 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:40:31.621147 kubelet[2179]: I0910 00:40:31.621088 2179 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:40:31.622007 kubelet[2179]: I0910 00:40:31.621957 2179 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 00:40:31.623621 kubelet[2179]: E0910 00:40:31.623587 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 
00:40:31.623621 kubelet[2179]: E0910 00:40:31.623618 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 00:40:31.623746 kubelet[2179]: W0910 00:40:31.623728 2179 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 00:40:31.627799 kubelet[2179]: I0910 00:40:31.627774 2179 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:40:31.627860 kubelet[2179]: I0910 00:40:31.627849 2179 server.go:1289] "Started kubelet" Sep 10 00:40:31.627994 kubelet[2179]: I0910 00:40:31.627933 2179 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:40:31.639789 kubelet[2179]: I0910 00:40:31.639730 2179 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:40:31.640221 kubelet[2179]: I0910 00:40:31.640166 2179 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:40:31.642664 kubelet[2179]: I0910 00:40:31.642614 2179 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:40:31.644499 kubelet[2179]: I0910 00:40:31.644453 2179 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:40:31.645032 kubelet[2179]: I0910 00:40:31.645012 2179 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:40:31.645142 kubelet[2179]: E0910 00:40:31.645111 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:31.647874 kubelet[2179]: I0910 00:40:31.646791 2179 server.go:317] "Adding debug handlers to kubelet server" Sep 10 00:40:31.647874 kubelet[2179]: I0910 00:40:31.647076 2179 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:40:31.647874 kubelet[2179]: I0910 00:40:31.647163 2179 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:40:31.655919 kubelet[2179]: I0910 00:40:31.655885 2179 factory.go:223] Registration of the systemd container factory successfully Sep 10 00:40:31.656059 kubelet[2179]: I0910 00:40:31.656010 2179 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:40:31.660858 kubelet[2179]: E0910 00:40:31.660652 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="200ms" Sep 10 00:40:31.660858 kubelet[2179]: E0910 00:40:31.655566 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 00:40:31.662272 kubelet[2179]: I0910 00:40:31.661802 2179 factory.go:223] Registration of the containerd container factory successfully Sep 10 
00:40:31.665008 kubelet[2179]: E0910 00:40:31.662035 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c4f3f26abded default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:40:31.627804141 +0000 UTC m=+0.615586577,LastTimestamp:2025-09-10 00:40:31.627804141 +0000 UTC m=+0.615586577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:40:31.665616 kubelet[2179]: E0910 00:40:31.665550 2179 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:40:31.678569 kubelet[2179]: I0910 00:40:31.678365 2179 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:40:31.678569 kubelet[2179]: I0910 00:40:31.678402 2179 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:40:31.678569 kubelet[2179]: I0910 00:40:31.678423 2179 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:40:31.681095 kubelet[2179]: I0910 00:40:31.681051 2179 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 00:40:31.683081 kubelet[2179]: I0910 00:40:31.683050 2179 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 00:40:31.683219 kubelet[2179]: I0910 00:40:31.683102 2179 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 00:40:31.683219 kubelet[2179]: I0910 00:40:31.683135 2179 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
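[Editor's note] The container_manager_linux.go NodeConfig dump logged above at 00:40:31.588571 maps one-to-one onto KubeletConfiguration fields, and the five HardEvictionThresholds it lists are the kubelet defaults. A sketch of the config that would produce that NodeConfig:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd      # logged as "CgroupDriver":"systemd"
  cgroupsPerQOS: true        # logged as "CgroupsPerQOS":true; with no cgroupRoot set, the kubelet defaults to "/", as warned above
  evictionHard:              # the five logged HardEvictionThresholds (0.1 = 10%, etc.)
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    imagefs.inodesFree: "5%"

Note that the repeated "dial tcp 10.0.0.90:6443: connect: connection refused" errors in this stretch are expected at this stage: the kubelet is configured to talk to an API server that it has not yet started from its static pod manifests, so watches, leases, and event posts fail until those pods come up.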
Sep 10 00:40:31.683219 kubelet[2179]: I0910 00:40:31.683144 2179 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 00:40:31.683326 kubelet[2179]: E0910 00:40:31.683210 2179 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:40:31.684103 kubelet[2179]: E0910 00:40:31.684059 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 00:40:31.746307 kubelet[2179]: E0910 00:40:31.746063 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:31.783630 kubelet[2179]: E0910 00:40:31.783571 2179 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:40:31.846902 kubelet[2179]: E0910 00:40:31.846837 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:31.861836 kubelet[2179]: E0910 00:40:31.861804 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="400ms" Sep 10 00:40:31.947097 kubelet[2179]: E0910 00:40:31.946989 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:31.984489 kubelet[2179]: E0910 00:40:31.984406 2179 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:40:32.047865 kubelet[2179]: E0910 00:40:32.047797 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.148227 kubelet[2179]: E0910 00:40:32.148110 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.248940 kubelet[2179]: E0910 00:40:32.248844 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.262684 kubelet[2179]: E0910 00:40:32.262631 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="800ms" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.349046 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.385431 2179 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.449159 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.536050 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 
10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.549595 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.650447 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.679412 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.751221 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.808083 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 00:40:32.939308 kubelet[2179]: E0910 00:40:32.851806 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:32.939840 kubelet[2179]: E0910 00:40:32.926803 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 00:40:32.952581 kubelet[2179]: E0910 00:40:32.952502 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:33.036777 kubelet[2179]: I0910 00:40:33.036708 2179 policy_none.go:49] "None policy: Start" Sep 10 00:40:33.036777 kubelet[2179]: I0910 00:40:33.036763 2179 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:40:33.036777 kubelet[2179]: I0910 00:40:33.036780 2179 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:40:33.053252 kubelet[2179]: E0910 00:40:33.053171 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:33.064164 kubelet[2179]: E0910 00:40:33.064115 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="1.6s" Sep 10 00:40:33.153352 kubelet[2179]: E0910 00:40:33.153281 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:33.185717 kubelet[2179]: E0910 00:40:33.185623 2179 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 00:40:33.254447 kubelet[2179]: E0910 00:40:33.254292 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 
00:40:33.284486 kubelet[2179]: E0910 00:40:33.284287 2179 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.90:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.90:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c4f3f26abded default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:40:31.627804141 +0000 UTC m=+0.615586577,LastTimestamp:2025-09-10 00:40:31.627804141 +0000 UTC m=+0.615586577,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:40:33.354716 kubelet[2179]: E0910 00:40:33.354626 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:33.442472 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 00:40:33.454893 kubelet[2179]: E0910 00:40:33.454821 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:33.458596 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:40:33.465344 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 00:40:33.478165 kubelet[2179]: E0910 00:40:33.478076 2179 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 00:40:33.478534 kubelet[2179]: I0910 00:40:33.478493 2179 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:40:33.478597 kubelet[2179]: I0910 00:40:33.478533 2179 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:40:33.479008 kubelet[2179]: I0910 00:40:33.478927 2179 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:40:33.479904 kubelet[2179]: E0910 00:40:33.479866 2179 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:40:33.479975 kubelet[2179]: E0910 00:40:33.479943 2179 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:40:33.582671 kubelet[2179]: I0910 00:40:33.582506 2179 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:33.584556 kubelet[2179]: E0910 00:40:33.584487 2179 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 10 00:40:33.678574 kubelet[2179]: E0910 00:40:33.678489 2179 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.90:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 00:40:33.786380 kubelet[2179]: I0910 00:40:33.786315 2179 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:33.786798 kubelet[2179]: E0910 00:40:33.786756 2179 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 10 00:40:34.189615 kubelet[2179]: I0910 00:40:34.189520 2179 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:34.190143 kubelet[2179]: E0910 00:40:34.190096 2179 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 10 00:40:34.210860 kubelet[2179]: E0910 00:40:34.210794 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.90:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 00:40:34.664753 kubelet[2179]: E0910 00:40:34.664674 2179 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.90:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.90:6443: connect: connection refused" interval="3.2s" Sep 10 00:40:34.865815 kubelet[2179]: I0910 00:40:34.865743 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/531722986d9b6314180e27118e4675e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"531722986d9b6314180e27118e4675e8\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:34.865815 kubelet[2179]: I0910 00:40:34.865799 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/531722986d9b6314180e27118e4675e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"531722986d9b6314180e27118e4675e8\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:34.865815 kubelet[2179]: I0910 00:40:34.865829 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/531722986d9b6314180e27118e4675e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"531722986d9b6314180e27118e4675e8\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:34.992238 kubelet[2179]: I0910 00:40:34.992064 2179 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:34.992547 kubelet[2179]: E0910 00:40:34.992495 2179 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 10 00:40:35.006450 systemd[1]: Created slice kubepods-burstable-pod531722986d9b6314180e27118e4675e8.slice - libcontainer container kubepods-burstable-pod531722986d9b6314180e27118e4675e8.slice. Sep 10 00:40:35.016321 kubelet[2179]: E0910 00:40:35.016262 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:35.016878 kubelet[2179]: E0910 00:40:35.016848 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:35.017725 containerd[1478]: time="2025-09-10T00:40:35.017641873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:531722986d9b6314180e27118e4675e8,Namespace:kube-system,Attempt:0,}" Sep 10 00:40:35.020416 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 10 00:40:35.030999 kubelet[2179]: E0910 00:40:35.030927 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:35.034730 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 10 00:40:35.037531 kubelet[2179]: E0910 00:40:35.037504 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:35.067011 kubelet[2179]: I0910 00:40:35.066925 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:35.067011 kubelet[2179]: I0910 00:40:35.067004 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:35.067256 kubelet[2179]: I0910 00:40:35.067033 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:35.067256 kubelet[2179]: I0910 00:40:35.067067 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:35.067323 kubelet[2179]: I0910 00:40:35.067264 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:35.070045 kubelet[2179]: I0910 00:40:35.067505 2179 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:35.310903 kubelet[2179]: E0910 00:40:35.310812 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.90:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 00:40:35.331647 kubelet[2179]: E0910 00:40:35.331560 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:35.332493 containerd[1478]: time="2025-09-10T00:40:35.332396401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 10 00:40:35.339080 kubelet[2179]: E0910 
00:40:35.339010 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:35.339435 kubelet[2179]: E0910 00:40:35.339376 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.90:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 00:40:35.339747 containerd[1478]: time="2025-09-10T00:40:35.339699498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 10 00:40:35.372242 kubelet[2179]: E0910 00:40:35.372155 2179 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.90:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.90:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 00:40:35.633739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349944136.mount: Deactivated successfully. Sep 10 00:40:35.642837 containerd[1478]: time="2025-09-10T00:40:35.642757087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:40:35.644143 containerd[1478]: time="2025-09-10T00:40:35.644057853Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:40:35.644952 containerd[1478]: time="2025-09-10T00:40:35.644851969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 10 00:40:35.646035 containerd[1478]: time="2025-09-10T00:40:35.645969724Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:40:35.647184 containerd[1478]: time="2025-09-10T00:40:35.647122793Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:40:35.648275 containerd[1478]: time="2025-09-10T00:40:35.648231748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 10 00:40:35.649506 containerd[1478]: time="2025-09-10T00:40:35.649383945Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:40:35.654263 containerd[1478]: time="2025-09-10T00:40:35.654175391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:40:35.655555 containerd[1478]: time="2025-09-10T00:40:35.655480979Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 322.977286ms" Sep 10 00:40:35.659067 containerd[1478]: time="2025-09-10T00:40:35.658981773Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.187201ms" Sep 10 00:40:35.660338 containerd[1478]: time="2025-09-10T00:40:35.660293424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 320.48875ms" Sep 10 00:40:35.905630 containerd[1478]: time="2025-09-10T00:40:35.904915193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:40:35.905630 containerd[1478]: time="2025-09-10T00:40:35.905031426Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:40:35.905630 containerd[1478]: time="2025-09-10T00:40:35.905051884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:35.905630 containerd[1478]: time="2025-09-10T00:40:35.905164399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:35.908376 containerd[1478]: time="2025-09-10T00:40:35.908070054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:40:35.908376 containerd[1478]: time="2025-09-10T00:40:35.908243271Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:40:35.908376 containerd[1478]: time="2025-09-10T00:40:35.908264100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:35.909744 containerd[1478]: time="2025-09-10T00:40:35.909655299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:35.911022 containerd[1478]: time="2025-09-10T00:40:35.910916883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:40:35.911022 containerd[1478]: time="2025-09-10T00:40:35.910962209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:40:35.911176 containerd[1478]: time="2025-09-10T00:40:35.910989573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:35.912018 containerd[1478]: time="2025-09-10T00:40:35.911930283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:35.995358 systemd[1]: Started cri-containerd-5ca6c5a1bfc722d838f79f7cd84d44cee0e8ec17c1c8b83202308dfdf6a4cc48.scope - libcontainer container 5ca6c5a1bfc722d838f79f7cd84d44cee0e8ec17c1c8b83202308dfdf6a4cc48. Sep 10 00:40:36.043344 systemd[1]: Started cri-containerd-06061793226f0994f127a8c8166bc6b82a1baca0162e863898b8cedbc4d8a174.scope - libcontainer container 06061793226f0994f127a8c8166bc6b82a1baca0162e863898b8cedbc4d8a174. Sep 10 00:40:36.045494 systemd[1]: Started cri-containerd-5262084b53e8d1587c979bea19790c43f9b0a6277719ae56e09cdc63e86080ba.scope - libcontainer container 5262084b53e8d1587c979bea19790c43f9b0a6277719ae56e09cdc63e86080ba. Sep 10 00:40:36.119138 containerd[1478]: time="2025-09-10T00:40:36.119011824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:531722986d9b6314180e27118e4675e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"06061793226f0994f127a8c8166bc6b82a1baca0162e863898b8cedbc4d8a174\"" Sep 10 00:40:36.123556 kubelet[2179]: E0910 00:40:36.123359 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:36.125710 containerd[1478]: time="2025-09-10T00:40:36.125654512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"5262084b53e8d1587c979bea19790c43f9b0a6277719ae56e09cdc63e86080ba\"" Sep 10 00:40:36.126406 kubelet[2179]: E0910 00:40:36.126379 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:36.138930 containerd[1478]: time="2025-09-10T00:40:36.138903344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ca6c5a1bfc722d838f79f7cd84d44cee0e8ec17c1c8b83202308dfdf6a4cc48\"" Sep 10 00:40:36.139538 kubelet[2179]: E0910 00:40:36.139511 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:36.320875 containerd[1478]: time="2025-09-10T00:40:36.320770133Z" level=info msg="CreateContainer within sandbox \"06061793226f0994f127a8c8166bc6b82a1baca0162e863898b8cedbc4d8a174\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:40:36.384551 containerd[1478]: time="2025-09-10T00:40:36.384484230Z" level=info msg="CreateContainer within sandbox \"5262084b53e8d1587c979bea19790c43f9b0a6277719ae56e09cdc63e86080ba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:40:36.445839 update_engine[1456]: I20250910 00:40:36.445706 1456 update_attempter.cc:509] Updating boot flags... 
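[Editor's note] The recurring dns.go "Nameserver limits exceeded" warnings in this stretch mean the host resolv.conf lists more nameservers than the kubelet will propagate into pods: glibc resolvers honor at most three, so the kubelet applied 1.1.1.1, 1.0.0.1, and 8.8.8.8 and omitted the rest. One possible way to quiet the warning, sketched here as an assumption rather than a fix taken from this system, is to point the kubelet at a trimmed resolver file (the path below is hypothetical) that contains at most three nameserver lines:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  # pod DNS is derived from this file instead of the host's /etc/resolv.conf
  resolvConf: /etc/kubernetes/kubelet-resolv.conf

The warning is cosmetic for cluster workloads using the cluster DNS service, which is why the bootstrap proceeds despite it.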
Sep 10 00:40:36.524612 containerd[1478]: time="2025-09-10T00:40:36.524546196Z" level=info msg="CreateContainer within sandbox \"5ca6c5a1bfc722d838f79f7cd84d44cee0e8ec17c1c8b83202308dfdf6a4cc48\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:40:36.545243 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2353) Sep 10 00:40:36.583245 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2357) Sep 10 00:40:36.595379 kubelet[2179]: I0910 00:40:36.594909 2179 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:36.596478 kubelet[2179]: E0910 00:40:36.596233 2179 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.90:6443/api/v1/nodes\": dial tcp 10.0.0.90:6443: connect: connection refused" node="localhost" Sep 10 00:40:36.677220 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2357) Sep 10 00:40:37.531472 containerd[1478]: time="2025-09-10T00:40:37.531322694Z" level=info msg="CreateContainer within sandbox \"06061793226f0994f127a8c8166bc6b82a1baca0162e863898b8cedbc4d8a174\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9820b607482e0ac64fa1c2a6359139cb2f4ee797794a2f46365e66867e898039\"" Sep 10 00:40:37.532479 containerd[1478]: time="2025-09-10T00:40:37.532444809Z" level=info msg="StartContainer for \"9820b607482e0ac64fa1c2a6359139cb2f4ee797794a2f46365e66867e898039\"" Sep 10 00:40:37.570443 systemd[1]: Started cri-containerd-9820b607482e0ac64fa1c2a6359139cb2f4ee797794a2f46365e66867e898039.scope - libcontainer container 9820b607482e0ac64fa1c2a6359139cb2f4ee797794a2f46365e66867e898039. Sep 10 00:40:37.715415 containerd[1478]: time="2025-09-10T00:40:37.715335912Z" level=info msg="CreateContainer within sandbox \"5262084b53e8d1587c979bea19790c43f9b0a6277719ae56e09cdc63e86080ba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4e4f5814d759e460b8b6802a9e7cade3872a28fbd8a1cfd84cda6627cac34a00\"" Sep 10 00:40:37.715415 containerd[1478]: time="2025-09-10T00:40:37.715397294Z" level=info msg="StartContainer for \"9820b607482e0ac64fa1c2a6359139cb2f4ee797794a2f46365e66867e898039\" returns successfully" Sep 10 00:40:37.716290 containerd[1478]: time="2025-09-10T00:40:37.716180438Z" level=info msg="StartContainer for \"4e4f5814d759e460b8b6802a9e7cade3872a28fbd8a1cfd84cda6627cac34a00\"" Sep 10 00:40:37.742466 systemd[1]: run-containerd-runc-k8s.io-4e4f5814d759e460b8b6802a9e7cade3872a28fbd8a1cfd84cda6627cac34a00-runc.saQnt4.mount: Deactivated successfully. Sep 10 00:40:37.753382 systemd[1]: Started cri-containerd-4e4f5814d759e460b8b6802a9e7cade3872a28fbd8a1cfd84cda6627cac34a00.scope - libcontainer container 4e4f5814d759e460b8b6802a9e7cade3872a28fbd8a1cfd84cda6627cac34a00. 
Sep 10 00:40:37.909861 containerd[1478]: time="2025-09-10T00:40:37.909561155Z" level=info msg="StartContainer for \"4e4f5814d759e460b8b6802a9e7cade3872a28fbd8a1cfd84cda6627cac34a00\" returns successfully" Sep 10 00:40:37.909861 containerd[1478]: time="2025-09-10T00:40:37.909569052Z" level=info msg="CreateContainer within sandbox \"5ca6c5a1bfc722d838f79f7cd84d44cee0e8ec17c1c8b83202308dfdf6a4cc48\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"25f7a786d4c2d9e2395e568fe2f4fc0c309b7ba7b9d8588e93ecc811649cc5ea\"" Sep 10 00:40:37.910604 containerd[1478]: time="2025-09-10T00:40:37.910577105Z" level=info msg="StartContainer for \"25f7a786d4c2d9e2395e568fe2f4fc0c309b7ba7b9d8588e93ecc811649cc5ea\"" Sep 10 00:40:37.979460 systemd[1]: Started cri-containerd-25f7a786d4c2d9e2395e568fe2f4fc0c309b7ba7b9d8588e93ecc811649cc5ea.scope - libcontainer container 25f7a786d4c2d9e2395e568fe2f4fc0c309b7ba7b9d8588e93ecc811649cc5ea. Sep 10 00:40:38.145721 containerd[1478]: time="2025-09-10T00:40:38.145648239Z" level=info msg="StartContainer for \"25f7a786d4c2d9e2395e568fe2f4fc0c309b7ba7b9d8588e93ecc811649cc5ea\" returns successfully" Sep 10 00:40:38.723391 kubelet[2179]: E0910 00:40:38.723341 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:38.724397 kubelet[2179]: E0910 00:40:38.723515 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:38.726714 kubelet[2179]: E0910 00:40:38.726678 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:38.726822 kubelet[2179]: E0910 00:40:38.726801 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:38.727053 kubelet[2179]: E0910 00:40:38.727026 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:38.727209 kubelet[2179]: E0910 00:40:38.727171 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:39.335032 kubelet[2179]: E0910 00:40:39.334624 2179 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:40:39.730051 kubelet[2179]: E0910 00:40:39.729910 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:39.730613 kubelet[2179]: E0910 00:40:39.730063 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:39.730613 kubelet[2179]: E0910 00:40:39.730094 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:39.730613 kubelet[2179]: E0910 00:40:39.730231 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:39.730613 kubelet[2179]: E0910 00:40:39.730345 2179 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:40:39.730613 kubelet[2179]: E0910 00:40:39.730422 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:39.775975 kubelet[2179]: E0910 00:40:39.775925 2179 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 10 00:40:39.798740 kubelet[2179]: I0910 00:40:39.798691 2179 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:39.807986 kubelet[2179]: I0910 00:40:39.807939 2179 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:40:39.807986 kubelet[2179]: E0910 00:40:39.807976 2179 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:40:39.818960 kubelet[2179]: E0910 00:40:39.818906 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:39.919739 kubelet[2179]: E0910 00:40:39.919665 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.019966 kubelet[2179]: E0910 00:40:40.019799 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.120179 kubelet[2179]: E0910 00:40:40.120055 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.220832 kubelet[2179]: E0910 00:40:40.220719 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.322228 kubelet[2179]: E0910 00:40:40.321636 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.421876 kubelet[2179]: E0910 00:40:40.421824 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.523073 kubelet[2179]: E0910 00:40:40.523004 2179 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:40:40.547360 kubelet[2179]: I0910 00:40:40.547289 2179 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:40.560872 kubelet[2179]: I0910 00:40:40.560291 2179 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:40.567447 kubelet[2179]: I0910 00:40:40.567402 2179 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:40.627875 kubelet[2179]: I0910 00:40:40.627729 2179 apiserver.go:52] "Watching apiserver" Sep 10 00:40:40.648389 kubelet[2179]: I0910 00:40:40.648329 2179 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:40:40.728436 kubelet[2179]: I0910 00:40:40.728395 2179 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 
00:40:40.729346 kubelet[2179]: E0910 00:40:40.728714 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:40.729346 kubelet[2179]: I0910 00:40:40.728863 2179 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:40.735598 kubelet[2179]: E0910 00:40:40.735536 2179 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:40.736062 kubelet[2179]: E0910 00:40:40.735751 2179 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:40.736062 kubelet[2179]: E0910 00:40:40.735826 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:40.736062 kubelet[2179]: E0910 00:40:40.736047 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:41.709468 kubelet[2179]: I0910 00:40:41.707631 2179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.707593924 podStartE2EDuration="1.707593924s" podCreationTimestamp="2025-09-10 00:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:40:41.706168659 +0000 UTC m=+10.693951105" watchObservedRunningTime="2025-09-10 00:40:41.707593924 +0000 UTC m=+10.695376360" Sep 10 00:40:41.732594 kubelet[2179]: E0910 00:40:41.732551 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:41.732768 kubelet[2179]: E0910 00:40:41.732694 2179 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:41.734828 kubelet[2179]: I0910 00:40:41.734779 2179 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.734762977 podStartE2EDuration="1.734762977s" podCreationTimestamp="2025-09-10 00:40:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:40:41.72492616 +0000 UTC m=+10.712708616" watchObservedRunningTime="2025-09-10 00:40:41.734762977 +0000 UTC m=+10.722545413" Sep 10 00:40:42.081240 systemd[1]: Reloading requested from client PID 2485 ('systemctl') (unit session-9.scope)... Sep 10 00:40:42.081259 systemd[1]: Reloading... Sep 10 00:40:42.179248 zram_generator::config[2524]: No configuration found. Sep 10 00:40:42.362186 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 00:40:42.467400 systemd[1]: Reloading finished in 385 ms. Sep 10 00:40:42.511922 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 10 00:40:42.532905 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:40:42.533275 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:42.533338 systemd[1]: kubelet.service: Consumed 1.282s CPU time, 133.0M memory peak, 0B memory swap peak. Sep 10 00:40:42.543658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:40:42.720298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:40:42.727179 (kubelet)[2569]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:40:42.780946 kubelet[2569]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:40:42.780946 kubelet[2569]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:40:42.780946 kubelet[2569]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:40:42.781476 kubelet[2569]: I0910 00:40:42.781057 2569 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:40:42.793274 kubelet[2569]: I0910 00:40:42.793177 2569 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 00:40:42.793274 kubelet[2569]: I0910 00:40:42.793258 2569 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:40:42.793648 kubelet[2569]: I0910 00:40:42.793612 2569 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 00:40:42.795340 kubelet[2569]: I0910 00:40:42.795296 2569 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 00:40:42.803252 kubelet[2569]: I0910 00:40:42.803182 2569 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:40:42.812287 kubelet[2569]: E0910 00:40:42.810337 2569 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 10 00:40:42.812287 kubelet[2569]: I0910 00:40:42.810373 2569 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 10 00:40:42.815872 kubelet[2569]: I0910 00:40:42.815818 2569 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:40:42.816203 kubelet[2569]: I0910 00:40:42.816144 2569 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:40:42.816376 kubelet[2569]: I0910 00:40:42.816183 2569 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:40:42.816490 kubelet[2569]: I0910 00:40:42.816379 2569 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:40:42.816490 kubelet[2569]: I0910 00:40:42.816391 2569 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 00:40:42.816490 kubelet[2569]: I0910 00:40:42.816461 2569 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:40:42.816667 kubelet[2569]: I0910 00:40:42.816645 2569 kubelet.go:480] "Attempting to sync node with API server" Sep 10 00:40:42.816667 kubelet[2569]: I0910 00:40:42.816663 2569 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:40:42.816753 kubelet[2569]: I0910 00:40:42.816697 2569 kubelet.go:386] "Adding apiserver pod source" Sep 10 00:40:42.817136 kubelet[2569]: I0910 00:40:42.817097 2569 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:40:42.818724 kubelet[2569]: I0910 00:40:42.818692 2569 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 10 00:40:42.820882 kubelet[2569]: I0910 00:40:42.819549 2569 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 00:40:42.841789 kubelet[2569]: I0910 00:40:42.841746 2569 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:40:42.841983 kubelet[2569]: I0910 00:40:42.841959 2569 server.go:1289] "Started kubelet" Sep 10 00:40:42.842287 kubelet[2569]: I0910 00:40:42.842242 2569 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:40:42.842380 kubelet[2569]: I0910 
00:40:42.842310 2569 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:40:42.843066 kubelet[2569]: I0910 00:40:42.843007 2569 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:40:42.843977 kubelet[2569]: I0910 00:40:42.843891 2569 server.go:317] "Adding debug handlers to kubelet server" Sep 10 00:40:42.847696 kubelet[2569]: I0910 00:40:42.847643 2569 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:40:42.849537 kubelet[2569]: I0910 00:40:42.848759 2569 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:40:42.849537 kubelet[2569]: I0910 00:40:42.849176 2569 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:40:42.851719 kubelet[2569]: I0910 00:40:42.851682 2569 factory.go:223] Registration of the systemd container factory successfully Sep 10 00:40:42.851873 kubelet[2569]: I0910 00:40:42.851830 2569 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:40:42.852874 kubelet[2569]: I0910 00:40:42.852823 2569 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:40:42.853212 kubelet[2569]: I0910 00:40:42.853166 2569 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:40:42.856356 kubelet[2569]: I0910 00:40:42.855529 2569 factory.go:223] Registration of the containerd container factory successfully Sep 10 00:40:42.858627 kubelet[2569]: E0910 00:40:42.858586 2569 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:40:42.873079 kubelet[2569]: I0910 00:40:42.872509 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 00:40:42.874604 kubelet[2569]: I0910 00:40:42.874568 2569 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 00:40:42.874658 kubelet[2569]: I0910 00:40:42.874632 2569 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 00:40:42.874724 kubelet[2569]: I0910 00:40:42.874669 2569 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 00:40:42.874724 kubelet[2569]: I0910 00:40:42.874680 2569 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 00:40:42.874793 kubelet[2569]: E0910 00:40:42.874746 2569 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:40:42.895887 kubelet[2569]: I0910 00:40:42.895822 2569 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:40:42.895887 kubelet[2569]: I0910 00:40:42.895850 2569 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:40:42.895887 kubelet[2569]: I0910 00:40:42.895892 2569 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:40:42.896086 kubelet[2569]: I0910 00:40:42.896059 2569 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:40:42.896086 kubelet[2569]: I0910 00:40:42.896069 2569 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:40:42.896086 kubelet[2569]: I0910 00:40:42.896086 2569 policy_none.go:49] "None policy: Start" Sep 10 00:40:42.896154 kubelet[2569]: I0910 00:40:42.896096 2569 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:40:42.896154 kubelet[2569]: I0910 00:40:42.896106 2569 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:40:42.896238 kubelet[2569]: I0910 00:40:42.896226 2569 state_mem.go:75] "Updated machine memory state" Sep 10 00:40:42.902545 kubelet[2569]: E0910 00:40:42.902510 2569 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 00:40:42.902930 kubelet[2569]: I0910 00:40:42.902715 2569 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:40:42.902930 kubelet[2569]: I0910 00:40:42.902732 2569 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:40:42.903019 kubelet[2569]: I0910 00:40:42.902991 2569 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:40:42.904225 kubelet[2569]: E0910 00:40:42.904174 2569 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:40:42.978412 kubelet[2569]: I0910 00:40:42.976685 2569 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:42.978412 kubelet[2569]: I0910 00:40:42.976883 2569 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:42.978412 kubelet[2569]: I0910 00:40:42.977597 2569 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:42.984167 kubelet[2569]: E0910 00:40:42.983926 2569 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:42.984167 kubelet[2569]: E0910 00:40:42.984070 2569 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:42.991489 kubelet[2569]: E0910 00:40:42.991444 2569 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.015939 kubelet[2569]: I0910 00:40:43.015871 2569 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:40:43.025358 kubelet[2569]: I0910 00:40:43.025304 2569 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 00:40:43.025519 kubelet[2569]: I0910 00:40:43.025432 2569 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:40:43.154937 kubelet[2569]: I0910 00:40:43.154861 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/531722986d9b6314180e27118e4675e8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"531722986d9b6314180e27118e4675e8\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:43.154937 kubelet[2569]: I0910 00:40:43.154922 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/531722986d9b6314180e27118e4675e8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"531722986d9b6314180e27118e4675e8\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:43.156421 kubelet[2569]: I0910 00:40:43.154971 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/531722986d9b6314180e27118e4675e8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"531722986d9b6314180e27118e4675e8\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:43.156421 kubelet[2569]: I0910 00:40:43.155036 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.156421 kubelet[2569]: I0910 00:40:43.155060 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " 
pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.156421 kubelet[2569]: I0910 00:40:43.155085 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.156421 kubelet[2569]: I0910 00:40:43.155111 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.156592 kubelet[2569]: I0910 00:40:43.155135 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:43.156592 kubelet[2569]: I0910 00:40:43.155158 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.284488 kubelet[2569]: E0910 00:40:43.284422 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:43.284697 kubelet[2569]: E0910 00:40:43.284424 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:43.292242 kubelet[2569]: E0910 00:40:43.292177 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:43.818432 kubelet[2569]: I0910 00:40:43.818379 2569 apiserver.go:52] "Watching apiserver" Sep 10 00:40:43.853740 kubelet[2569]: I0910 00:40:43.853685 2569 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:40:43.886262 kubelet[2569]: I0910 00:40:43.886009 2569 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:43.886444 kubelet[2569]: I0910 00:40:43.886317 2569 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:43.887354 kubelet[2569]: I0910 00:40:43.886109 2569 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:43.984013 kubelet[2569]: E0910 00:40:43.983940 2569 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 00:40:43.984226 kubelet[2569]: E0910 00:40:43.984172 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:44.014263 kubelet[2569]: E0910 00:40:44.012549 2569 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:40:44.014263 kubelet[2569]: E0910 00:40:44.012723 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:44.014263 kubelet[2569]: E0910 00:40:44.013725 2569 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:40:44.014263 kubelet[2569]: E0910 00:40:44.013926 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:44.890290 kubelet[2569]: E0910 00:40:44.890240 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:44.890290 kubelet[2569]: E0910 00:40:44.890240 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:44.891126 kubelet[2569]: E0910 00:40:44.890494 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:45.892041 kubelet[2569]: E0910 00:40:45.891977 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:47.361805 kubelet[2569]: I0910 00:40:47.361754 2569 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:40:47.362421 kubelet[2569]: I0910 00:40:47.362361 2569 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:40:47.362467 containerd[1478]: time="2025-09-10T00:40:47.362152375Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 00:40:48.258665 systemd[1]: Created slice kubepods-besteffort-podf76c361f_4203_47c2_9af7_7e9e07941a48.slice - libcontainer container kubepods-besteffort-podf76c361f_4203_47c2_9af7_7e9e07941a48.slice. 
Sep 10 00:40:48.292479 kubelet[2569]: I0910 00:40:48.292375 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f76c361f-4203-47c2-9af7-7e9e07941a48-kube-proxy\") pod \"kube-proxy-7pkp7\" (UID: \"f76c361f-4203-47c2-9af7-7e9e07941a48\") " pod="kube-system/kube-proxy-7pkp7" Sep 10 00:40:48.292479 kubelet[2569]: I0910 00:40:48.292429 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f76c361f-4203-47c2-9af7-7e9e07941a48-xtables-lock\") pod \"kube-proxy-7pkp7\" (UID: \"f76c361f-4203-47c2-9af7-7e9e07941a48\") " pod="kube-system/kube-proxy-7pkp7" Sep 10 00:40:48.292726 kubelet[2569]: I0910 00:40:48.292498 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f76c361f-4203-47c2-9af7-7e9e07941a48-lib-modules\") pod \"kube-proxy-7pkp7\" (UID: \"f76c361f-4203-47c2-9af7-7e9e07941a48\") " pod="kube-system/kube-proxy-7pkp7" Sep 10 00:40:48.292726 kubelet[2569]: I0910 00:40:48.292522 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxvlt\" (UniqueName: \"kubernetes.io/projected/f76c361f-4203-47c2-9af7-7e9e07941a48-kube-api-access-jxvlt\") pod \"kube-proxy-7pkp7\" (UID: \"f76c361f-4203-47c2-9af7-7e9e07941a48\") " pod="kube-system/kube-proxy-7pkp7" Sep 10 00:40:48.378870 systemd[1]: Created slice kubepods-besteffort-pod60436f1d_6435_4ab5_a145_cc5d5645dabf.slice - libcontainer container kubepods-besteffort-pod60436f1d_6435_4ab5_a145_cc5d5645dabf.slice. Sep 10 00:40:48.393397 kubelet[2569]: I0910 00:40:48.393333 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nvv5\" (UniqueName: \"kubernetes.io/projected/60436f1d-6435-4ab5-a145-cc5d5645dabf-kube-api-access-6nvv5\") pod \"tigera-operator-755d956888-v9vmz\" (UID: \"60436f1d-6435-4ab5-a145-cc5d5645dabf\") " pod="tigera-operator/tigera-operator-755d956888-v9vmz" Sep 10 00:40:48.393397 kubelet[2569]: I0910 00:40:48.393385 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/60436f1d-6435-4ab5-a145-cc5d5645dabf-var-lib-calico\") pod \"tigera-operator-755d956888-v9vmz\" (UID: \"60436f1d-6435-4ab5-a145-cc5d5645dabf\") " pod="tigera-operator/tigera-operator-755d956888-v9vmz" Sep 10 00:40:48.570906 kubelet[2569]: E0910 00:40:48.570866 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:48.571668 containerd[1478]: time="2025-09-10T00:40:48.571616366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pkp7,Uid:f76c361f-4203-47c2-9af7-7e9e07941a48,Namespace:kube-system,Attempt:0,}" Sep 10 00:40:48.604085 containerd[1478]: time="2025-09-10T00:40:48.603676356Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:40:48.604085 containerd[1478]: time="2025-09-10T00:40:48.603825985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:40:48.604085 containerd[1478]: time="2025-09-10T00:40:48.603839916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:48.604085 containerd[1478]: time="2025-09-10T00:40:48.603953227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:48.631537 systemd[1]: Started cri-containerd-0593c13c90defb1bc285fd98208785498796dc15119b80bd1b3d0230ecd0a3e1.scope - libcontainer container 0593c13c90defb1bc285fd98208785498796dc15119b80bd1b3d0230ecd0a3e1. Sep 10 00:40:48.659544 containerd[1478]: time="2025-09-10T00:40:48.659458387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7pkp7,Uid:f76c361f-4203-47c2-9af7-7e9e07941a48,Namespace:kube-system,Attempt:0,} returns sandbox id \"0593c13c90defb1bc285fd98208785498796dc15119b80bd1b3d0230ecd0a3e1\"" Sep 10 00:40:48.660437 kubelet[2569]: E0910 00:40:48.660398 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:48.683031 containerd[1478]: time="2025-09-10T00:40:48.682963564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-v9vmz,Uid:60436f1d-6435-4ab5-a145-cc5d5645dabf,Namespace:tigera-operator,Attempt:0,}" Sep 10 00:40:48.710300 containerd[1478]: time="2025-09-10T00:40:48.710270508Z" level=info msg="CreateContainer within sandbox \"0593c13c90defb1bc285fd98208785498796dc15119b80bd1b3d0230ecd0a3e1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:40:48.749992 containerd[1478]: time="2025-09-10T00:40:48.749920120Z" level=info msg="CreateContainer within sandbox \"0593c13c90defb1bc285fd98208785498796dc15119b80bd1b3d0230ecd0a3e1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b2224941e0fe0194e81ab9ec8b227ed46174107de36c313b0658c502dc7c72a\"" Sep 10 00:40:48.752660 containerd[1478]: time="2025-09-10T00:40:48.750904271Z" level=info msg="StartContainer for \"0b2224941e0fe0194e81ab9ec8b227ed46174107de36c313b0658c502dc7c72a\"" Sep 10 00:40:48.753600 containerd[1478]: time="2025-09-10T00:40:48.753393595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:40:48.754580 containerd[1478]: time="2025-09-10T00:40:48.754422882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:40:48.754916 containerd[1478]: time="2025-09-10T00:40:48.754789965Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:48.755077 containerd[1478]: time="2025-09-10T00:40:48.755032643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:40:48.777465 systemd[1]: Started cri-containerd-c0bb315c1c1857ece214eb58707070bf4f59b70ad77544b69ed309cc209b9ed4.scope - libcontainer container c0bb315c1c1857ece214eb58707070bf4f59b70ad77544b69ed309cc209b9ed4. Sep 10 00:40:48.781534 systemd[1]: Started cri-containerd-0b2224941e0fe0194e81ab9ec8b227ed46174107de36c313b0658c502dc7c72a.scope - libcontainer container 0b2224941e0fe0194e81ab9ec8b227ed46174107de36c313b0658c502dc7c72a. 
Sep 10 00:40:48.827092 containerd[1478]: time="2025-09-10T00:40:48.826819220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-v9vmz,Uid:60436f1d-6435-4ab5-a145-cc5d5645dabf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c0bb315c1c1857ece214eb58707070bf4f59b70ad77544b69ed309cc209b9ed4\"" Sep 10 00:40:48.829540 containerd[1478]: time="2025-09-10T00:40:48.829479419Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 00:40:48.837679 containerd[1478]: time="2025-09-10T00:40:48.837553826Z" level=info msg="StartContainer for \"0b2224941e0fe0194e81ab9ec8b227ed46174107de36c313b0658c502dc7c72a\" returns successfully" Sep 10 00:40:48.898462 kubelet[2569]: E0910 00:40:48.898382 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:49.520010 kubelet[2569]: E0910 00:40:49.519898 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:49.534133 kubelet[2569]: I0910 00:40:49.534037 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7pkp7" podStartSLOduration=1.534020481 podStartE2EDuration="1.534020481s" podCreationTimestamp="2025-09-10 00:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:40:48.910532127 +0000 UTC m=+6.175966109" watchObservedRunningTime="2025-09-10 00:40:49.534020481 +0000 UTC m=+6.799454443" Sep 10 00:40:49.900429 kubelet[2569]: E0910 00:40:49.900385 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:50.600312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895063878.mount: Deactivated successfully. 
Sep 10 00:40:50.997825 containerd[1478]: time="2025-09-10T00:40:50.997662292Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:50.998791 containerd[1478]: time="2025-09-10T00:40:50.998695877Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 10 00:40:50.999725 containerd[1478]: time="2025-09-10T00:40:50.999686723Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:51.001981 containerd[1478]: time="2025-09-10T00:40:51.001941836Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:40:51.002653 containerd[1478]: time="2025-09-10T00:40:51.002617900Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.17309125s" Sep 10 00:40:51.002653 containerd[1478]: time="2025-09-10T00:40:51.002650790Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 10 00:40:51.007693 containerd[1478]: time="2025-09-10T00:40:51.007639376Z" level=info msg="CreateContainer within sandbox \"c0bb315c1c1857ece214eb58707070bf4f59b70ad77544b69ed309cc209b9ed4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 00:40:51.021135 containerd[1478]: time="2025-09-10T00:40:51.021090614Z" level=info msg="CreateContainer within sandbox \"c0bb315c1c1857ece214eb58707070bf4f59b70ad77544b69ed309cc209b9ed4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5da7053da00b90c3252a41e9780570605ca748264a16f4f922b96c436f9663b8\"" Sep 10 00:40:51.022228 containerd[1478]: time="2025-09-10T00:40:51.021550914Z" level=info msg="StartContainer for \"5da7053da00b90c3252a41e9780570605ca748264a16f4f922b96c436f9663b8\"" Sep 10 00:40:51.052369 systemd[1]: Started cri-containerd-5da7053da00b90c3252a41e9780570605ca748264a16f4f922b96c436f9663b8.scope - libcontainer container 5da7053da00b90c3252a41e9780570605ca748264a16f4f922b96c436f9663b8. 
Sep 10 00:40:51.316217 containerd[1478]: time="2025-09-10T00:40:51.316107251Z" level=info msg="StartContainer for \"5da7053da00b90c3252a41e9780570605ca748264a16f4f922b96c436f9663b8\" returns successfully" Sep 10 00:40:51.804512 kubelet[2569]: E0910 00:40:51.804099 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:51.905717 kubelet[2569]: E0910 00:40:51.905667 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:52.221978 kubelet[2569]: I0910 00:40:52.220902 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-v9vmz" podStartSLOduration=2.046255004 podStartE2EDuration="4.22088267s" podCreationTimestamp="2025-09-10 00:40:48 +0000 UTC" firstStartedPulling="2025-09-10 00:40:48.828789968 +0000 UTC m=+6.094223930" lastFinishedPulling="2025-09-10 00:40:51.003417634 +0000 UTC m=+8.268851596" observedRunningTime="2025-09-10 00:40:52.220419578 +0000 UTC m=+9.485853541" watchObservedRunningTime="2025-09-10 00:40:52.22088267 +0000 UTC m=+9.486316632" Sep 10 00:40:54.586897 kubelet[2569]: E0910 00:40:54.586806 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:40:58.750492 sudo[1663]: pam_unix(sudo:session): session closed for user root Sep 10 00:40:58.753232 sshd[1660]: pam_unix(sshd:session): session closed for user core Sep 10 00:40:58.757907 systemd[1]: sshd@8-10.0.0.90:22-10.0.0.1:33506.service: Deactivated successfully. Sep 10 00:40:58.761126 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:40:58.761583 systemd[1]: session-9.scope: Consumed 5.621s CPU time, 163.0M memory peak, 0B memory swap peak. Sep 10 00:40:58.763171 systemd-logind[1453]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:40:58.765493 systemd-logind[1453]: Removed session 9. Sep 10 00:41:02.102271 systemd[1]: Created slice kubepods-besteffort-pod59da1e5f_c003_4888_8c33_ab9be2d8f37c.slice - libcontainer container kubepods-besteffort-pod59da1e5f_c003_4888_8c33_ab9be2d8f37c.slice. Sep 10 00:41:02.180720 systemd[1]: Created slice kubepods-besteffort-podc7bb28fd_5283_46f2_b3f8_10abc4bf2afa.slice - libcontainer container kubepods-besteffort-podc7bb28fd_5283_46f2_b3f8_10abc4bf2afa.slice. 
Sep 10 00:41:02.186362 kubelet[2569]: I0910 00:41:02.186227 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59da1e5f-c003-4888-8c33-ab9be2d8f37c-tigera-ca-bundle\") pod \"calico-typha-5799fc464c-2s9bq\" (UID: \"59da1e5f-c003-4888-8c33-ab9be2d8f37c\") " pod="calico-system/calico-typha-5799fc464c-2s9bq" Sep 10 00:41:02.186362 kubelet[2569]: I0910 00:41:02.186270 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cngh\" (UniqueName: \"kubernetes.io/projected/59da1e5f-c003-4888-8c33-ab9be2d8f37c-kube-api-access-6cngh\") pod \"calico-typha-5799fc464c-2s9bq\" (UID: \"59da1e5f-c003-4888-8c33-ab9be2d8f37c\") " pod="calico-system/calico-typha-5799fc464c-2s9bq" Sep 10 00:41:02.186362 kubelet[2569]: I0910 00:41:02.186290 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/59da1e5f-c003-4888-8c33-ab9be2d8f37c-typha-certs\") pod \"calico-typha-5799fc464c-2s9bq\" (UID: \"59da1e5f-c003-4888-8c33-ab9be2d8f37c\") " pod="calico-system/calico-typha-5799fc464c-2s9bq" Sep 10 00:41:02.287571 kubelet[2569]: I0910 00:41:02.287515 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-cni-bin-dir\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287571 kubelet[2569]: I0910 00:41:02.287559 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-xtables-lock\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287571 kubelet[2569]: I0910 00:41:02.287588 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-cni-log-dir\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287823 kubelet[2569]: I0910 00:41:02.287603 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-lib-modules\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287823 kubelet[2569]: I0910 00:41:02.287617 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-var-lib-calico\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287823 kubelet[2569]: I0910 00:41:02.287640 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-cni-net-dir\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 
00:41:02.287823 kubelet[2569]: I0910 00:41:02.287654 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-node-certs\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287823 kubelet[2569]: I0910 00:41:02.287678 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-tigera-ca-bundle\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287946 kubelet[2569]: I0910 00:41:02.287693 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q75fq\" (UniqueName: \"kubernetes.io/projected/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-kube-api-access-q75fq\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287946 kubelet[2569]: I0910 00:41:02.287714 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-var-run-calico\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287946 kubelet[2569]: I0910 00:41:02.287744 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-flexvol-driver-host\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.287946 kubelet[2569]: I0910 00:41:02.287791 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c7bb28fd-5283-46f2-b3f8-10abc4bf2afa-policysync\") pod \"calico-node-hdkbr\" (UID: \"c7bb28fd-5283-46f2-b3f8-10abc4bf2afa\") " pod="calico-system/calico-node-hdkbr" Sep 10 00:41:02.299230 kubelet[2569]: E0910 00:41:02.295578 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:02.388649 kubelet[2569]: I0910 00:41:02.388311 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a49cae08-4a20-4c05-9f35-ae3ac5421522-socket-dir\") pod \"csi-node-driver-6q4hq\" (UID: \"a49cae08-4a20-4c05-9f35-ae3ac5421522\") " pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:02.388649 kubelet[2569]: I0910 00:41:02.388394 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49qvn\" (UniqueName: \"kubernetes.io/projected/a49cae08-4a20-4c05-9f35-ae3ac5421522-kube-api-access-49qvn\") pod \"csi-node-driver-6q4hq\" (UID: \"a49cae08-4a20-4c05-9f35-ae3ac5421522\") " pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:02.388649 kubelet[2569]: I0910 
00:41:02.388432 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a49cae08-4a20-4c05-9f35-ae3ac5421522-registration-dir\") pod \"csi-node-driver-6q4hq\" (UID: \"a49cae08-4a20-4c05-9f35-ae3ac5421522\") " pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:02.388649 kubelet[2569]: I0910 00:41:02.388505 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a49cae08-4a20-4c05-9f35-ae3ac5421522-varrun\") pod \"csi-node-driver-6q4hq\" (UID: \"a49cae08-4a20-4c05-9f35-ae3ac5421522\") " pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:02.388649 kubelet[2569]: I0910 00:41:02.388560 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a49cae08-4a20-4c05-9f35-ae3ac5421522-kubelet-dir\") pod \"csi-node-driver-6q4hq\" (UID: \"a49cae08-4a20-4c05-9f35-ae3ac5421522\") " pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:02.393661 kubelet[2569]: E0910 00:41:02.393610 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.393661 kubelet[2569]: W0910 00:41:02.393636 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.396566 kubelet[2569]: E0910 00:41:02.396496 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.398230 kubelet[2569]: E0910 00:41:02.396800 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.398230 kubelet[2569]: W0910 00:41:02.396816 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.398230 kubelet[2569]: E0910 00:41:02.396830 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.398230 kubelet[2569]: E0910 00:41:02.397293 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.398230 kubelet[2569]: W0910 00:41:02.397328 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.398230 kubelet[2569]: E0910 00:41:02.397363 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:02.400393 kubelet[2569]: E0910 00:41:02.400366 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.400450 kubelet[2569]: W0910 00:41:02.400391 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.400450 kubelet[2569]: E0910 00:41:02.400422 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.410599 kubelet[2569]: E0910 00:41:02.410532 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:02.411362 containerd[1478]: time="2025-09-10T00:41:02.411312258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5799fc464c-2s9bq,Uid:59da1e5f-c003-4888-8c33-ab9be2d8f37c,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:02.484710 containerd[1478]: time="2025-09-10T00:41:02.484619089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdkbr,Uid:c7bb28fd-5283-46f2-b3f8-10abc4bf2afa,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:02.490440 kubelet[2569]: E0910 00:41:02.490099 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.490440 kubelet[2569]: W0910 00:41:02.490127 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.490440 kubelet[2569]: E0910 00:41:02.490152 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.490782 kubelet[2569]: E0910 00:41:02.490578 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.490782 kubelet[2569]: W0910 00:41:02.490605 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.490782 kubelet[2569]: E0910 00:41:02.490634 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.491672 kubelet[2569]: E0910 00:41:02.491648 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.491672 kubelet[2569]: W0910 00:41:02.491672 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.491915 kubelet[2569]: E0910 00:41:02.491685 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:02.492271 kubelet[2569]: E0910 00:41:02.492239 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.492271 kubelet[2569]: W0910 00:41:02.492258 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.492271 kubelet[2569]: E0910 00:41:02.492274 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.492753 kubelet[2569]: E0910 00:41:02.492735 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.492753 kubelet[2569]: W0910 00:41:02.492751 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.492873 kubelet[2569]: E0910 00:41:02.492827 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.493315 kubelet[2569]: E0910 00:41:02.493263 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.493315 kubelet[2569]: W0910 00:41:02.493297 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.493395 kubelet[2569]: E0910 00:41:02.493328 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.493674 kubelet[2569]: E0910 00:41:02.493657 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.493729 kubelet[2569]: W0910 00:41:02.493671 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.493729 kubelet[2569]: E0910 00:41:02.493713 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.494221 kubelet[2569]: E0910 00:41:02.494069 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.494221 kubelet[2569]: W0910 00:41:02.494083 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.494221 kubelet[2569]: E0910 00:41:02.494095 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:02.495345 kubelet[2569]: E0910 00:41:02.495170 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.495345 kubelet[2569]: W0910 00:41:02.495185 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.495345 kubelet[2569]: E0910 00:41:02.495230 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.495871 kubelet[2569]: E0910 00:41:02.495823 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.495871 kubelet[2569]: W0910 00:41:02.495835 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.496011 kubelet[2569]: E0910 00:41:02.495847 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.496469 kubelet[2569]: E0910 00:41:02.496396 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.496469 kubelet[2569]: W0910 00:41:02.496410 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.496469 kubelet[2569]: E0910 00:41:02.496423 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.497275 kubelet[2569]: E0910 00:41:02.497043 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.497275 kubelet[2569]: W0910 00:41:02.497056 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.497275 kubelet[2569]: E0910 00:41:02.497131 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.497838 kubelet[2569]: E0910 00:41:02.497618 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.497838 kubelet[2569]: W0910 00:41:02.497630 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.497838 kubelet[2569]: E0910 00:41:02.497641 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:02.498125 kubelet[2569]: E0910 00:41:02.498005 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.498125 kubelet[2569]: W0910 00:41:02.498018 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.498125 kubelet[2569]: E0910 00:41:02.498028 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.498325 kubelet[2569]: E0910 00:41:02.498284 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.498325 kubelet[2569]: W0910 00:41:02.498296 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.498325 kubelet[2569]: E0910 00:41:02.498306 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.498585 kubelet[2569]: E0910 00:41:02.498510 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.498585 kubelet[2569]: W0910 00:41:02.498520 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.498585 kubelet[2569]: E0910 00:41:02.498528 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.498866 kubelet[2569]: E0910 00:41:02.498844 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.498866 kubelet[2569]: W0910 00:41:02.498858 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.498959 kubelet[2569]: E0910 00:41:02.498874 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.499658 kubelet[2569]: E0910 00:41:02.499627 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.499658 kubelet[2569]: W0910 00:41:02.499645 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.499658 kubelet[2569]: E0910 00:41:02.499658 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:02.499986 kubelet[2569]: E0910 00:41:02.499955 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.499986 kubelet[2569]: W0910 00:41:02.499973 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.499986 kubelet[2569]: E0910 00:41:02.499985 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.500442 kubelet[2569]: E0910 00:41:02.500388 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.500442 kubelet[2569]: W0910 00:41:02.500425 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.500566 kubelet[2569]: E0910 00:41:02.500461 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.500874 kubelet[2569]: E0910 00:41:02.500842 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.500874 kubelet[2569]: W0910 00:41:02.500858 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.500874 kubelet[2569]: E0910 00:41:02.500870 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.501141 kubelet[2569]: E0910 00:41:02.501112 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.501141 kubelet[2569]: W0910 00:41:02.501126 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.501141 kubelet[2569]: E0910 00:41:02.501138 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.501431 kubelet[2569]: E0910 00:41:02.501398 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.501431 kubelet[2569]: W0910 00:41:02.501413 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.501431 kubelet[2569]: E0910 00:41:02.501425 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 00:41:02.510469 kubelet[2569]: E0910 00:41:02.501731 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.510469 kubelet[2569]: W0910 00:41:02.501743 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.510469 kubelet[2569]: E0910 00:41:02.501755 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.510469 kubelet[2569]: E0910 00:41:02.502024 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.510469 kubelet[2569]: W0910 00:41:02.502038 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.510469 kubelet[2569]: E0910 00:41:02.502053 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.558468 kubelet[2569]: E0910 00:41:02.558247 2569 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 00:41:02.558468 kubelet[2569]: W0910 00:41:02.558291 2569 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 00:41:02.558468 kubelet[2569]: E0910 00:41:02.558329 2569 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 00:41:02.612453 containerd[1478]: time="2025-09-10T00:41:02.611839184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:02.612453 containerd[1478]: time="2025-09-10T00:41:02.611994049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:02.612453 containerd[1478]: time="2025-09-10T00:41:02.612023500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:02.612453 containerd[1478]: time="2025-09-10T00:41:02.612163245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:02.631914 containerd[1478]: time="2025-09-10T00:41:02.631696445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:02.632259 containerd[1478]: time="2025-09-10T00:41:02.632117454Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:02.632737 containerd[1478]: time="2025-09-10T00:41:02.632672224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:02.633959 containerd[1478]: time="2025-09-10T00:41:02.633906179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:02.643497 systemd[1]: Started cri-containerd-3882d5729cbc9612b1360be94075b468bc72eedd7095f2a041da29b864850f20.scope - libcontainer container 3882d5729cbc9612b1360be94075b468bc72eedd7095f2a041da29b864850f20. Sep 10 00:41:02.686758 systemd[1]: Started cri-containerd-33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2.scope - libcontainer container 33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2. Sep 10 00:41:02.724783 containerd[1478]: time="2025-09-10T00:41:02.724737036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hdkbr,Uid:c7bb28fd-5283-46f2-b3f8-10abc4bf2afa,Namespace:calico-system,Attempt:0,} returns sandbox id \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\"" Sep 10 00:41:02.730429 containerd[1478]: time="2025-09-10T00:41:02.730388061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 00:41:02.735849 containerd[1478]: time="2025-09-10T00:41:02.735801311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5799fc464c-2s9bq,Uid:59da1e5f-c003-4888-8c33-ab9be2d8f37c,Namespace:calico-system,Attempt:0,} returns sandbox id \"3882d5729cbc9612b1360be94075b468bc72eedd7095f2a041da29b864850f20\"" Sep 10 00:41:02.736892 kubelet[2569]: E0910 00:41:02.736866 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:03.875512 kubelet[2569]: E0910 00:41:03.875435 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:04.511258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount144034061.mount: Deactivated successfully. 
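
The repeated driver-call.go failures above have a single cause: the kubelet probes the FlexVolume plugin directory nodeagent~uds, the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds does not exist, so the call produces no output at all, and decoding zero bytes as JSON is exactly what Go's encoding/json reports as "unexpected end of JSON input". A minimal sketch of that failure mode (illustrative only, not the kubelet's actual driver-call.go):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus approximates the JSON a FlexVolume driver must print for
// "init"; the real schema is richer, this is just enough to decode.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Call the missing driver binary, as the kubelet does for "init".
	out, err := exec.Command(
		"/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
		"init").Output()
	if err != nil {
		// With the binary absent, this fails before any output is produced
		// (the kubelet's exec wrapper logs it as "executable file not found").
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}

	// The probe still tries to decode whatever came back; decoding zero
	// bytes is what surfaces as "unexpected end of JSON input".
	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("failed to unmarshal output for command: init:", err)
	}
}

Until the binary exists, every periodic plugin probe re-emits the same three lines, which is why the block above repeats with only the timestamps changing.
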
Sep 10 00:41:04.584422 containerd[1478]: time="2025-09-10T00:41:04.584368141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:04.585278 containerd[1478]: time="2025-09-10T00:41:04.585218366Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 10 00:41:04.586662 containerd[1478]: time="2025-09-10T00:41:04.586614961Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:04.588770 containerd[1478]: time="2025-09-10T00:41:04.588732520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:04.589376 containerd[1478]: time="2025-09-10T00:41:04.589335233Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.858911448s" Sep 10 00:41:04.589376 containerd[1478]: time="2025-09-10T00:41:04.589367308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 10 00:41:04.593698 containerd[1478]: time="2025-09-10T00:41:04.593493562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 00:41:04.596857 containerd[1478]: time="2025-09-10T00:41:04.596801225Z" level=info msg="CreateContainer within sandbox \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 00:41:04.614777 containerd[1478]: time="2025-09-10T00:41:04.614722949Z" level=info msg="CreateContainer within sandbox \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110\"" Sep 10 00:41:04.615511 containerd[1478]: time="2025-09-10T00:41:04.615469644Z" level=info msg="StartContainer for \"88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110\"" Sep 10 00:41:04.650358 systemd[1]: Started cri-containerd-88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110.scope - libcontainer container 88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110. Sep 10 00:41:04.692020 containerd[1478]: time="2025-09-10T00:41:04.691962735Z" level=info msg="StartContainer for \"88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110\" returns successfully" Sep 10 00:41:04.703949 systemd[1]: cri-containerd-88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110.scope: Deactivated successfully. 
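
The "in 1.858911448s" figure above is simply the wall-clock span of the pull: the PullImage request for pod2daemon-flexvol was logged at 00:41:02.730 and the Pulled event at 00:41:04.589. A quick check of that arithmetic from the two logged timestamps; the remaining few dozen microseconds of difference are containerd measuring internally, slightly before the log line is written:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage / Pulled entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2025-09-10T00:41:02.730388061Z")
	end, _ := time.Parse(time.RFC3339Nano, "2025-09-10T00:41:04.589335233Z")
	fmt.Println(end.Sub(start)) // 1.858947172s, within ~36µs of the logged figure
}
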
Sep 10 00:41:05.167360 containerd[1478]: time="2025-09-10T00:41:05.167255889Z" level=info msg="shim disconnected" id=88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110 namespace=k8s.io Sep 10 00:41:05.167360 containerd[1478]: time="2025-09-10T00:41:05.167342043Z" level=warning msg="cleaning up after shim disconnected" id=88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110 namespace=k8s.io Sep 10 00:41:05.167360 containerd[1478]: time="2025-09-10T00:41:05.167356042Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:41:05.488700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88ade4feab0fae24d8f4d49133afbec0767a2d09aa9f79ac5d071a5714b56110-rootfs.mount: Deactivated successfully. Sep 10 00:41:05.875593 kubelet[2569]: E0910 00:41:05.875517 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:07.876157 kubelet[2569]: E0910 00:41:07.876075 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:07.886893 containerd[1478]: time="2025-09-10T00:41:07.886837253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:07.887830 containerd[1478]: time="2025-09-10T00:41:07.887771088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 10 00:41:07.889061 containerd[1478]: time="2025-09-10T00:41:07.889018908Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:07.892044 containerd[1478]: time="2025-09-10T00:41:07.891952383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:07.892719 containerd[1478]: time="2025-09-10T00:41:07.892677316Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.299148914s" Sep 10 00:41:07.892719 containerd[1478]: time="2025-09-10T00:41:07.892706766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 10 00:41:07.893645 containerd[1478]: time="2025-09-10T00:41:07.893603126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 00:41:07.912498 containerd[1478]: time="2025-09-10T00:41:07.912423892Z" level=info msg="CreateContainer within sandbox \"3882d5729cbc9612b1360be94075b468bc72eedd7095f2a041da29b864850f20\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
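
The recurring dns.go:153 "Nameserver limits exceeded" entries come from the kubelet capping the resolver list it writes into a pod's resolv.conf at three entries, the classic glibc limit; the host evidently carries at least one resolver beyond the three that survive (1.1.1.1 1.0.0.1 8.8.8.8), though the omitted ones never appear in the log. A hedged sketch of that truncation, not the kubelet's actual dns.go, with the dropped fourth resolver invented purely for illustration:

package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the three-entry resolv.conf limit the kubelet
// enforces when building a pod's resolver configuration.
const maxNameservers = 3

func truncateNameservers(all []string) ([]string, bool) {
	if len(all) <= maxNameservers {
		return all, false
	}
	return all[:maxNameservers], true
}

func main() {
	// Four upstream resolvers; "8.8.4.4" is a hypothetical stand-in for
	// whatever this host actually had configured beyond the first three.
	applied, exceeded := truncateNameservers(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	if exceeded {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted," +
			" the applied nameserver line is: " + strings.Join(applied, " "))
	}
}
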
Sep 10 00:41:07.928615 containerd[1478]: time="2025-09-10T00:41:07.928535076Z" level=info msg="CreateContainer within sandbox \"3882d5729cbc9612b1360be94075b468bc72eedd7095f2a041da29b864850f20\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"42fc857fc6dded9b9e2876d3c2cd09c74efd107421671cbfe0654f2885f1b0e7\"" Sep 10 00:41:07.929374 containerd[1478]: time="2025-09-10T00:41:07.929317406Z" level=info msg="StartContainer for \"42fc857fc6dded9b9e2876d3c2cd09c74efd107421671cbfe0654f2885f1b0e7\"" Sep 10 00:41:07.961522 systemd[1]: Started cri-containerd-42fc857fc6dded9b9e2876d3c2cd09c74efd107421671cbfe0654f2885f1b0e7.scope - libcontainer container 42fc857fc6dded9b9e2876d3c2cd09c74efd107421671cbfe0654f2885f1b0e7. Sep 10 00:41:08.010734 containerd[1478]: time="2025-09-10T00:41:08.010675397Z" level=info msg="StartContainer for \"42fc857fc6dded9b9e2876d3c2cd09c74efd107421671cbfe0654f2885f1b0e7\" returns successfully" Sep 10 00:41:08.944659 kubelet[2569]: E0910 00:41:08.944616 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:08.969392 kubelet[2569]: I0910 00:41:08.969301 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5799fc464c-2s9bq" podStartSLOduration=1.8142446410000002 podStartE2EDuration="6.96927446s" podCreationTimestamp="2025-09-10 00:41:02 +0000 UTC" firstStartedPulling="2025-09-10 00:41:02.738425378 +0000 UTC m=+20.003859340" lastFinishedPulling="2025-09-10 00:41:07.893455197 +0000 UTC m=+25.158889159" observedRunningTime="2025-09-10 00:41:08.959091056 +0000 UTC m=+26.224525018" watchObservedRunningTime="2025-09-10 00:41:08.96927446 +0000 UTC m=+26.234708422" Sep 10 00:41:09.875694 kubelet[2569]: E0910 00:41:09.875613 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:09.946800 kubelet[2569]: E0910 00:41:09.946704 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:10.948577 kubelet[2569]: E0910 00:41:10.948528 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:11.432350 containerd[1478]: time="2025-09-10T00:41:11.432234748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:11.433224 containerd[1478]: time="2025-09-10T00:41:11.433165698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 10 00:41:11.434463 containerd[1478]: time="2025-09-10T00:41:11.434427472Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:11.437946 containerd[1478]: time="2025-09-10T00:41:11.437902542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:11.438856 containerd[1478]: time="2025-09-10T00:41:11.438814413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.545181066s" Sep 10 00:41:11.438924 containerd[1478]: time="2025-09-10T00:41:11.438860997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 10 00:41:11.444033 containerd[1478]: time="2025-09-10T00:41:11.443993495Z" level=info msg="CreateContainer within sandbox \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 00:41:11.460558 containerd[1478]: time="2025-09-10T00:41:11.460518325Z" level=info msg="CreateContainer within sandbox \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab\"" Sep 10 00:41:11.461052 containerd[1478]: time="2025-09-10T00:41:11.460992287Z" level=info msg="StartContainer for \"625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab\"" Sep 10 00:41:11.503583 systemd[1]: Started cri-containerd-625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab.scope - libcontainer container 625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab. Sep 10 00:41:11.543598 containerd[1478]: time="2025-09-10T00:41:11.543548158Z" level=info msg="StartContainer for \"625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab\" returns successfully" Sep 10 00:41:12.377802 kubelet[2569]: E0910 00:41:12.377740 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:14.095122 kubelet[2569]: E0910 00:41:14.095044 2569 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.22s" Sep 10 00:41:14.098416 kubelet[2569]: E0910 00:41:14.097507 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:14.912775 systemd[1]: cri-containerd-625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab.scope: Deactivated successfully. Sep 10 00:41:14.938436 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab-rootfs.mount: Deactivated successfully. 
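
The pod_startup_latency_tracker entry above for calico-typha-5799fc464c-2s9bq is internally consistent: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration is that E2E figure minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). Re-deriving both numbers from the logged timestamps confirms it (the m=+… monotonic suffixes are dropped before parsing; the derivation is an inference from the numbers, not quoted kubelet code):

package main

import (
	"fmt"
	"time"
)

// layout matches the "2025-09-10 00:41:02 +0000 UTC" form in the log
// (Go's default time.Time string format).
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(v string) time.Time {
	t, err := time.Parse(layout, v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-10 00:41:02 +0000 UTC")
	firstPull := mustParse("2025-09-10 00:41:02.738425378 +0000 UTC")
	lastPull := mustParse("2025-09-10 00:41:07.893455197 +0000 UTC")
	running := mustParse("2025-09-10 00:41:08.96927446 +0000 UTC")

	e2e := running.Sub(created)        // podStartE2EDuration
	pulling := lastPull.Sub(firstPull) // time spent pulling images
	fmt.Println(e2e, e2e-pulling)      // 6.96927446s 1.814244641s
}
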
Sep 10 00:41:14.939741 kubelet[2569]: I0910 00:41:14.939705 2569 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 00:41:15.160476 containerd[1478]: time="2025-09-10T00:41:15.159370852Z" level=info msg="shim disconnected" id=625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab namespace=k8s.io Sep 10 00:41:15.160476 containerd[1478]: time="2025-09-10T00:41:15.159447686Z" level=warning msg="cleaning up after shim disconnected" id=625b92af7b210bbe04c79ca5d15ec53744f858ffc869812e03dcd4bfbd5051ab namespace=k8s.io Sep 10 00:41:15.160476 containerd[1478]: time="2025-09-10T00:41:15.159460551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:41:15.170693 systemd[1]: Created slice kubepods-burstable-pod39471f10_8655_44e1_b957_a2e56d511c05.slice - libcontainer container kubepods-burstable-pod39471f10_8655_44e1_b957_a2e56d511c05.slice. Sep 10 00:41:15.184103 systemd[1]: Created slice kubepods-besteffort-pod85529152_632b_471b_a89e_05d8b212c595.slice - libcontainer container kubepods-besteffort-pod85529152_632b_471b_a89e_05d8b212c595.slice. Sep 10 00:41:15.197033 systemd[1]: Created slice kubepods-burstable-pod5e8668c2_c5ca_4727_aa07_f9c264cfce9b.slice - libcontainer container kubepods-burstable-pod5e8668c2_c5ca_4727_aa07_f9c264cfce9b.slice. Sep 10 00:41:15.209962 systemd[1]: Created slice kubepods-besteffort-pod3c61dc0e_b865_477a_ab78_34bd76f499d1.slice - libcontainer container kubepods-besteffort-pod3c61dc0e_b865_477a_ab78_34bd76f499d1.slice. Sep 10 00:41:15.220698 systemd[1]: Created slice kubepods-besteffort-pod0742522a_e5f6_4d86_9672_4927d9011444.slice - libcontainer container kubepods-besteffort-pod0742522a_e5f6_4d86_9672_4927d9011444.slice. Sep 10 00:41:15.228903 systemd[1]: Created slice kubepods-besteffort-pod3961797f_cb69_46a6_8831_a00deb4ca0a0.slice - libcontainer container kubepods-besteffort-pod3961797f_cb69_46a6_8831_a00deb4ca0a0.slice. Sep 10 00:41:15.238410 systemd[1]: Created slice kubepods-besteffort-podcc162702_bc71_43f1_9f9b_1556715e5f12.slice - libcontainer container kubepods-besteffort-podcc162702_bc71_43f1_9f9b_1556715e5f12.slice. 
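
The slice names above are mechanical: with the systemd cgroup driver, each pod gets kubepods-<qos>-pod<uid>.slice, where the dashes in the pod UID are escaped to underscores because systemd reserves "-" as its slice hierarchy separator. A small sketch of the mapping, a simplification of the kubelet's cgroup naming but sufficient to reproduce the names logged here:

package main

import (
	"fmt"
	"strings"
)

// sliceName builds the systemd slice for a pod as the entries above show:
// QoS class in the middle, pod UID with '-' escaped to '_'.
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "39471f10-8655-44e1-b957-a2e56d511c05"))
	// kubepods-burstable-pod39471f10_8655_44e1_b957_a2e56d511c05.slice
	fmt.Println(sliceName("besteffort", "a49cae08-4a20-4c05-9f35-ae3ac5421522"))
	// kubepods-besteffort-poda49cae08_4a20_4c05_9f35_ae3ac5421522.slice
}
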
Sep 10 00:41:15.301370 kubelet[2569]: I0910 00:41:15.301281 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z5vx\" (UniqueName: \"kubernetes.io/projected/5e8668c2-c5ca-4727-aa07-f9c264cfce9b-kube-api-access-4z5vx\") pod \"coredns-674b8bbfcf-b547m\" (UID: \"5e8668c2-c5ca-4727-aa07-f9c264cfce9b\") " pod="kube-system/coredns-674b8bbfcf-b547m" Sep 10 00:41:15.301370 kubelet[2569]: I0910 00:41:15.301343 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0742522a-e5f6-4d86-9672-4927d9011444-calico-apiserver-certs\") pod \"calico-apiserver-66b8fdf8b8-gh75h\" (UID: \"0742522a-e5f6-4d86-9672-4927d9011444\") " pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" Sep 10 00:41:15.302099 kubelet[2569]: I0910 00:41:15.301418 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3961797f-cb69-46a6-8831-a00deb4ca0a0-calico-apiserver-certs\") pod \"calico-apiserver-66b8fdf8b8-524kf\" (UID: \"3961797f-cb69-46a6-8831-a00deb4ca0a0\") " pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" Sep 10 00:41:15.302099 kubelet[2569]: I0910 00:41:15.301443 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbsrk\" (UniqueName: \"kubernetes.io/projected/cc162702-bc71-43f1-9f9b-1556715e5f12-kube-api-access-bbsrk\") pod \"goldmane-54d579b49d-5swrp\" (UID: \"cc162702-bc71-43f1-9f9b-1556715e5f12\") " pod="calico-system/goldmane-54d579b49d-5swrp" Sep 10 00:41:15.302099 kubelet[2569]: I0910 00:41:15.301474 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zgm\" (UniqueName: \"kubernetes.io/projected/0742522a-e5f6-4d86-9672-4927d9011444-kube-api-access-q9zgm\") pod \"calico-apiserver-66b8fdf8b8-gh75h\" (UID: \"0742522a-e5f6-4d86-9672-4927d9011444\") " pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" Sep 10 00:41:15.302099 kubelet[2569]: I0910 00:41:15.301552 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkssg\" (UniqueName: \"kubernetes.io/projected/39471f10-8655-44e1-b957-a2e56d511c05-kube-api-access-fkssg\") pod \"coredns-674b8bbfcf-fkc6w\" (UID: \"39471f10-8655-44e1-b957-a2e56d511c05\") " pod="kube-system/coredns-674b8bbfcf-fkc6w" Sep 10 00:41:15.302099 kubelet[2569]: I0910 00:41:15.301630 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e8668c2-c5ca-4727-aa07-f9c264cfce9b-config-volume\") pod \"coredns-674b8bbfcf-b547m\" (UID: \"5e8668c2-c5ca-4727-aa07-f9c264cfce9b\") " pod="kube-system/coredns-674b8bbfcf-b547m" Sep 10 00:41:15.302338 kubelet[2569]: I0910 00:41:15.301667 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2tbd\" (UniqueName: \"kubernetes.io/projected/3961797f-cb69-46a6-8831-a00deb4ca0a0-kube-api-access-d2tbd\") pod \"calico-apiserver-66b8fdf8b8-524kf\" (UID: \"3961797f-cb69-46a6-8831-a00deb4ca0a0\") " pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" Sep 10 00:41:15.302338 kubelet[2569]: I0910 00:41:15.301688 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-backend-key-pair\") pod \"whisker-79bff68756-4dck5\" (UID: \"3c61dc0e-b865-477a-ab78-34bd76f499d1\") " pod="calico-system/whisker-79bff68756-4dck5" Sep 10 00:41:15.302338 kubelet[2569]: I0910 00:41:15.301712 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x7wh\" (UniqueName: \"kubernetes.io/projected/3c61dc0e-b865-477a-ab78-34bd76f499d1-kube-api-access-7x7wh\") pod \"whisker-79bff68756-4dck5\" (UID: \"3c61dc0e-b865-477a-ab78-34bd76f499d1\") " pod="calico-system/whisker-79bff68756-4dck5" Sep 10 00:41:15.302338 kubelet[2569]: I0910 00:41:15.301739 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85529152-632b-471b-a89e-05d8b212c595-tigera-ca-bundle\") pod \"calico-kube-controllers-66f64968dc-xxlgr\" (UID: \"85529152-632b-471b-a89e-05d8b212c595\") " pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" Sep 10 00:41:15.302338 kubelet[2569]: I0910 00:41:15.301758 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc162702-bc71-43f1-9f9b-1556715e5f12-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-5swrp\" (UID: \"cc162702-bc71-43f1-9f9b-1556715e5f12\") " pod="calico-system/goldmane-54d579b49d-5swrp" Sep 10 00:41:15.302517 kubelet[2569]: I0910 00:41:15.301775 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-ca-bundle\") pod \"whisker-79bff68756-4dck5\" (UID: \"3c61dc0e-b865-477a-ab78-34bd76f499d1\") " pod="calico-system/whisker-79bff68756-4dck5" Sep 10 00:41:15.302517 kubelet[2569]: I0910 00:41:15.301795 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/39471f10-8655-44e1-b957-a2e56d511c05-config-volume\") pod \"coredns-674b8bbfcf-fkc6w\" (UID: \"39471f10-8655-44e1-b957-a2e56d511c05\") " pod="kube-system/coredns-674b8bbfcf-fkc6w" Sep 10 00:41:15.302517 kubelet[2569]: I0910 00:41:15.301813 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pslrw\" (UniqueName: \"kubernetes.io/projected/85529152-632b-471b-a89e-05d8b212c595-kube-api-access-pslrw\") pod \"calico-kube-controllers-66f64968dc-xxlgr\" (UID: \"85529152-632b-471b-a89e-05d8b212c595\") " pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" Sep 10 00:41:15.302517 kubelet[2569]: I0910 00:41:15.301836 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cc162702-bc71-43f1-9f9b-1556715e5f12-config\") pod \"goldmane-54d579b49d-5swrp\" (UID: \"cc162702-bc71-43f1-9f9b-1556715e5f12\") " pod="calico-system/goldmane-54d579b49d-5swrp" Sep 10 00:41:15.302517 kubelet[2569]: I0910 00:41:15.301851 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/cc162702-bc71-43f1-9f9b-1556715e5f12-goldmane-key-pair\") pod \"goldmane-54d579b49d-5swrp\" (UID: \"cc162702-bc71-43f1-9f9b-1556715e5f12\") " 
pod="calico-system/goldmane-54d579b49d-5swrp" Sep 10 00:41:15.478882 kubelet[2569]: E0910 00:41:15.478697 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:15.479972 containerd[1478]: time="2025-09-10T00:41:15.479914995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fkc6w,Uid:39471f10-8655-44e1-b957-a2e56d511c05,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:15.491284 containerd[1478]: time="2025-09-10T00:41:15.491185997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66f64968dc-xxlgr,Uid:85529152-632b-471b-a89e-05d8b212c595,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:15.507359 kubelet[2569]: E0910 00:41:15.506914 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:15.507704 containerd[1478]: time="2025-09-10T00:41:15.507645627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b547m,Uid:5e8668c2-c5ca-4727-aa07-f9c264cfce9b,Namespace:kube-system,Attempt:0,}" Sep 10 00:41:15.517489 containerd[1478]: time="2025-09-10T00:41:15.517410437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bff68756-4dck5,Uid:3c61dc0e-b865-477a-ab78-34bd76f499d1,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:15.531299 containerd[1478]: time="2025-09-10T00:41:15.528442041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-gh75h,Uid:0742522a-e5f6-4d86-9672-4927d9011444,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:41:15.535976 containerd[1478]: time="2025-09-10T00:41:15.535934967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-524kf,Uid:3961797f-cb69-46a6-8831-a00deb4ca0a0,Namespace:calico-apiserver,Attempt:0,}" Sep 10 00:41:15.544743 containerd[1478]: time="2025-09-10T00:41:15.544703043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5swrp,Uid:cc162702-bc71-43f1-9f9b-1556715e5f12,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:15.620725 containerd[1478]: time="2025-09-10T00:41:15.620327997Z" level=error msg="Failed to destroy network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.621656 containerd[1478]: time="2025-09-10T00:41:15.621571204Z" level=error msg="Failed to destroy network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.623993 containerd[1478]: time="2025-09-10T00:41:15.623928939Z" level=error msg="Failed to destroy network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.625921 containerd[1478]: time="2025-09-10T00:41:15.625862256Z" level=error msg="encountered an error cleaning up failed 
sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.625981 containerd[1478]: time="2025-09-10T00:41:15.625947727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fkc6w,Uid:39471f10-8655-44e1-b957-a2e56d511c05,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626011 containerd[1478]: time="2025-09-10T00:41:15.625864049Z" level=error msg="encountered an error cleaning up failed sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626104 containerd[1478]: time="2025-09-10T00:41:15.626061384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66f64968dc-xxlgr,Uid:85529152-632b-471b-a89e-05d8b212c595,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626104 containerd[1478]: time="2025-09-10T00:41:15.625862997Z" level=error msg="encountered an error cleaning up failed sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626410 containerd[1478]: time="2025-09-10T00:41:15.626128588Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b547m,Uid:5e8668c2-c5ca-4727-aa07-f9c264cfce9b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626452 kubelet[2569]: E0910 00:41:15.626325 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626452 kubelet[2569]: E0910 00:41:15.626330 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626452 kubelet[2569]: E0910 00:41:15.626408 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-b547m" Sep 10 00:41:15.626452 kubelet[2569]: E0910 00:41:15.626418 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fkc6w" Sep 10 00:41:15.626614 kubelet[2569]: E0910 00:41:15.626394 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.626614 kubelet[2569]: E0910 00:41:15.626503 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" Sep 10 00:41:15.626614 kubelet[2569]: E0910 00:41:15.626540 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" Sep 10 00:41:15.626707 kubelet[2569]: E0910 00:41:15.626444 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fkc6w" Sep 10 00:41:15.626707 kubelet[2569]: E0910 00:41:15.626436 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-b547m" Sep 10 00:41:15.626707 kubelet[2569]: E0910 00:41:15.626637 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-66f64968dc-xxlgr_calico-system(85529152-632b-471b-a89e-05d8b212c595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-66f64968dc-xxlgr_calico-system(85529152-632b-471b-a89e-05d8b212c595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" podUID="85529152-632b-471b-a89e-05d8b212c595" Sep 10 00:41:15.626817 kubelet[2569]: E0910 00:41:15.626695 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fkc6w_kube-system(39471f10-8655-44e1-b957-a2e56d511c05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fkc6w_kube-system(39471f10-8655-44e1-b957-a2e56d511c05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fkc6w" podUID="39471f10-8655-44e1-b957-a2e56d511c05" Sep 10 00:41:15.626817 kubelet[2569]: E0910 00:41:15.626736 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-b547m_kube-system(5e8668c2-c5ca-4727-aa07-f9c264cfce9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-b547m_kube-system(5e8668c2-c5ca-4727-aa07-f9c264cfce9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b547m" podUID="5e8668c2-c5ca-4727-aa07-f9c264cfce9b" Sep 10 00:41:15.885503 systemd[1]: Created slice kubepods-besteffort-poda49cae08_4a20_4c05_9f35_ae3ac5421522.slice - libcontainer container kubepods-besteffort-poda49cae08_4a20_4c05_9f35_ae3ac5421522.slice. 
Sep 10 00:41:15.889645 containerd[1478]: time="2025-09-10T00:41:15.889589422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q4hq,Uid:a49cae08-4a20-4c05-9f35-ae3ac5421522,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:15.910579 containerd[1478]: time="2025-09-10T00:41:15.910513170Z" level=error msg="Failed to destroy network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.910898 containerd[1478]: time="2025-09-10T00:41:15.910871697Z" level=error msg="encountered an error cleaning up failed sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.910951 containerd[1478]: time="2025-09-10T00:41:15.910918862Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-gh75h,Uid:0742522a-e5f6-4d86-9672-4927d9011444,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.912489 kubelet[2569]: E0910 00:41:15.911541 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.912489 kubelet[2569]: E0910 00:41:15.912259 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" Sep 10 00:41:15.912489 kubelet[2569]: E0910 00:41:15.912304 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" Sep 10 00:41:15.912651 kubelet[2569]: E0910 00:41:15.912360 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b8fdf8b8-gh75h_calico-apiserver(0742522a-e5f6-4d86-9672-4927d9011444)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b8fdf8b8-gh75h_calico-apiserver(0742522a-e5f6-4d86-9672-4927d9011444)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" podUID="0742522a-e5f6-4d86-9672-4927d9011444" Sep 10 00:41:15.923440 containerd[1478]: time="2025-09-10T00:41:15.923378621Z" level=error msg="Failed to destroy network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.925871 containerd[1478]: time="2025-09-10T00:41:15.925606577Z" level=error msg="Failed to destroy network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.926419 containerd[1478]: time="2025-09-10T00:41:15.926392259Z" level=error msg="encountered an error cleaning up failed sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.926613 containerd[1478]: time="2025-09-10T00:41:15.926570355Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-524kf,Uid:3961797f-cb69-46a6-8831-a00deb4ca0a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.926711 containerd[1478]: time="2025-09-10T00:41:15.926437409Z" level=error msg="encountered an error cleaning up failed sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.926711 containerd[1478]: time="2025-09-10T00:41:15.926670917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5swrp,Uid:cc162702-bc71-43f1-9f9b-1556715e5f12,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.926912 kubelet[2569]: E0910 00:41:15.926860 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 10 00:41:15.926986 kubelet[2569]: E0910 00:41:15.926923 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5swrp" Sep 10 00:41:15.926986 kubelet[2569]: E0910 00:41:15.926945 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-5swrp" Sep 10 00:41:15.927075 kubelet[2569]: E0910 00:41:15.927005 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-5swrp_calico-system(cc162702-bc71-43f1-9f9b-1556715e5f12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-5swrp_calico-system(cc162702-bc71-43f1-9f9b-1556715e5f12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5swrp" podUID="cc162702-bc71-43f1-9f9b-1556715e5f12" Sep 10 00:41:15.927346 kubelet[2569]: E0910 00:41:15.927315 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.927346 kubelet[2569]: E0910 00:41:15.927342 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" Sep 10 00:41:15.927616 kubelet[2569]: E0910 00:41:15.927356 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" Sep 10 00:41:15.927616 kubelet[2569]: E0910 00:41:15.927395 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b8fdf8b8-524kf_calico-apiserver(3961797f-cb69-46a6-8831-a00deb4ca0a0)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"calico-apiserver-66b8fdf8b8-524kf_calico-apiserver(3961797f-cb69-46a6-8831-a00deb4ca0a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" podUID="3961797f-cb69-46a6-8831-a00deb4ca0a0" Sep 10 00:41:15.932265 containerd[1478]: time="2025-09-10T00:41:15.932179023Z" level=error msg="Failed to destroy network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.932872 containerd[1478]: time="2025-09-10T00:41:15.932828512Z" level=error msg="encountered an error cleaning up failed sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.932936 containerd[1478]: time="2025-09-10T00:41:15.932901939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79bff68756-4dck5,Uid:3c61dc0e-b865-477a-ab78-34bd76f499d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.933151 kubelet[2569]: E0910 00:41:15.933107 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.933151 kubelet[2569]: E0910 00:41:15.933151 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bff68756-4dck5" Sep 10 00:41:15.933378 kubelet[2569]: E0910 00:41:15.933173 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-79bff68756-4dck5" Sep 10 00:41:15.933378 kubelet[2569]: E0910 00:41:15.933235 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-79bff68756-4dck5_calico-system(3c61dc0e-b865-477a-ab78-34bd76f499d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-79bff68756-4dck5_calico-system(3c61dc0e-b865-477a-ab78-34bd76f499d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bff68756-4dck5" podUID="3c61dc0e-b865-477a-ab78-34bd76f499d1" Sep 10 00:41:15.984211 containerd[1478]: time="2025-09-10T00:41:15.984129215Z" level=error msg="Failed to destroy network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.984646 containerd[1478]: time="2025-09-10T00:41:15.984607131Z" level=error msg="encountered an error cleaning up failed sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.984690 containerd[1478]: time="2025-09-10T00:41:15.984661871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q4hq,Uid:a49cae08-4a20-4c05-9f35-ae3ac5421522,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.985010 kubelet[2569]: E0910 00:41:15.984947 2569 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:15.985174 kubelet[2569]: E0910 00:41:15.985020 2569 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:15.985174 kubelet[2569]: E0910 00:41:15.985049 2569 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6q4hq" Sep 10 00:41:15.985174 kubelet[2569]: E0910 00:41:15.985107 2569 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6q4hq_calico-system(a49cae08-4a20-4c05-9f35-ae3ac5421522)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6q4hq_calico-system(a49cae08-4a20-4c05-9f35-ae3ac5421522)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:15.989362 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f-shm.mount: Deactivated successfully. Sep 10 00:41:16.102848 kubelet[2569]: I0910 00:41:16.102797 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:16.103808 kubelet[2569]: I0910 00:41:16.103780 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:16.104298 containerd[1478]: time="2025-09-10T00:41:16.104248424Z" level=info msg="StopPodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\"" Sep 10 00:41:16.104925 containerd[1478]: time="2025-09-10T00:41:16.104340968Z" level=info msg="StopPodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\"" Sep 10 00:41:16.106551 kubelet[2569]: I0910 00:41:16.105476 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:16.106651 containerd[1478]: time="2025-09-10T00:41:16.106182068Z" level=info msg="Ensure that sandbox 32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd in task-service has been cleanup successfully" Sep 10 00:41:16.106651 containerd[1478]: time="2025-09-10T00:41:16.106380313Z" level=info msg="StopPodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\"" Sep 10 00:41:16.106651 containerd[1478]: time="2025-09-10T00:41:16.106568339Z" level=info msg="Ensure that sandbox f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2 in task-service has been cleanup successfully" Sep 10 00:41:16.111092 kubelet[2569]: I0910 00:41:16.111047 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:16.115317 containerd[1478]: time="2025-09-10T00:41:16.115245950Z" level=info msg="StopPodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\"" Sep 10 00:41:16.115528 containerd[1478]: time="2025-09-10T00:41:16.115496510Z" level=info msg="Ensure that sandbox 781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40 in task-service has been cleanup successfully" Sep 10 00:41:16.116073 kubelet[2569]: I0910 00:41:16.116027 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:16.116357 containerd[1478]: time="2025-09-10T00:41:16.116303723Z" level=info msg="Ensure that sandbox 7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba in 
task-service has been cleanup successfully" Sep 10 00:41:16.117097 containerd[1478]: time="2025-09-10T00:41:16.117045996Z" level=info msg="StopPodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\"" Sep 10 00:41:16.117350 containerd[1478]: time="2025-09-10T00:41:16.117262719Z" level=info msg="Ensure that sandbox c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23 in task-service has been cleanup successfully" Sep 10 00:41:16.121992 kubelet[2569]: I0910 00:41:16.121940 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:16.123668 containerd[1478]: time="2025-09-10T00:41:16.123634512Z" level=info msg="StopPodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\"" Sep 10 00:41:16.123874 containerd[1478]: time="2025-09-10T00:41:16.123844581Z" level=info msg="Ensure that sandbox 4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739 in task-service has been cleanup successfully" Sep 10 00:41:16.131261 containerd[1478]: time="2025-09-10T00:41:16.131223156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 00:41:16.131861 kubelet[2569]: I0910 00:41:16.131833 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:16.134294 containerd[1478]: time="2025-09-10T00:41:16.133632781Z" level=info msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" Sep 10 00:41:16.136339 containerd[1478]: time="2025-09-10T00:41:16.136291674Z" level=info msg="Ensure that sandbox b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f in task-service has been cleanup successfully" Sep 10 00:41:16.143426 kubelet[2569]: I0910 00:41:16.142283 2569 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:16.143885 containerd[1478]: time="2025-09-10T00:41:16.143856571Z" level=info msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" Sep 10 00:41:16.144223 containerd[1478]: time="2025-09-10T00:41:16.144170658Z" level=info msg="Ensure that sandbox 4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9 in task-service has been cleanup successfully" Sep 10 00:41:16.175406 containerd[1478]: time="2025-09-10T00:41:16.175335654Z" level=error msg="StopPodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" failed" error="failed to destroy network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.176058 kubelet[2569]: E0910 00:41:16.175715 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:16.176058 kubelet[2569]: E0910 00:41:16.175816 
2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2"} Sep 10 00:41:16.176058 kubelet[2569]: E0910 00:41:16.175922 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc162702-bc71-43f1-9f9b-1556715e5f12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.176058 kubelet[2569]: E0910 00:41:16.175968 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc162702-bc71-43f1-9f9b-1556715e5f12\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-5swrp" podUID="cc162702-bc71-43f1-9f9b-1556715e5f12" Sep 10 00:41:16.197249 containerd[1478]: time="2025-09-10T00:41:16.196999895Z" level=error msg="StopPodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" failed" error="failed to destroy network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.197461 kubelet[2569]: E0910 00:41:16.197284 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:16.197461 kubelet[2569]: E0910 00:41:16.197356 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23"} Sep 10 00:41:16.197461 kubelet[2569]: E0910 00:41:16.197394 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"39471f10-8655-44e1-b957-a2e56d511c05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.197461 kubelet[2569]: E0910 00:41:16.197420 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"39471f10-8655-44e1-b957-a2e56d511c05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fkc6w" podUID="39471f10-8655-44e1-b957-a2e56d511c05" Sep 10 00:41:16.201091 containerd[1478]: time="2025-09-10T00:41:16.201045759Z" level=error msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" failed" error="failed to destroy network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.201343 kubelet[2569]: E0910 00:41:16.201292 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:16.201409 kubelet[2569]: E0910 00:41:16.201355 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f"} Sep 10 00:41:16.201409 kubelet[2569]: E0910 00:41:16.201384 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a49cae08-4a20-4c05-9f35-ae3ac5421522\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.201486 kubelet[2569]: E0910 00:41:16.201406 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a49cae08-4a20-4c05-9f35-ae3ac5421522\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:16.202560 containerd[1478]: time="2025-09-10T00:41:16.202522921Z" level=error msg="StopPodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" failed" error="failed to destroy network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.202674 kubelet[2569]: E0910 00:41:16.202651 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:16.202726 kubelet[2569]: E0910 00:41:16.202679 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba"} Sep 10 00:41:16.202726 kubelet[2569]: E0910 00:41:16.202700 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c61dc0e-b865-477a-ab78-34bd76f499d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.202726 kubelet[2569]: E0910 00:41:16.202717 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c61dc0e-b865-477a-ab78-34bd76f499d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-79bff68756-4dck5" podUID="3c61dc0e-b865-477a-ab78-34bd76f499d1" Sep 10 00:41:16.209050 containerd[1478]: time="2025-09-10T00:41:16.208947699Z" level=error msg="StopPodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" failed" error="failed to destroy network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.210064 kubelet[2569]: E0910 00:41:16.209360 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:16.210064 kubelet[2569]: E0910 00:41:16.209446 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd"} Sep 10 00:41:16.210064 kubelet[2569]: E0910 00:41:16.209500 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0742522a-e5f6-4d86-9672-4927d9011444\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.210064 kubelet[2569]: E0910 00:41:16.209534 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"0742522a-e5f6-4d86-9672-4927d9011444\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" podUID="0742522a-e5f6-4d86-9672-4927d9011444" Sep 10 00:41:16.212250 containerd[1478]: time="2025-09-10T00:41:16.212168476Z" level=error msg="StopPodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" failed" error="failed to destroy network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.212838 kubelet[2569]: E0910 00:41:16.212630 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:16.212838 kubelet[2569]: E0910 00:41:16.212701 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40"} Sep 10 00:41:16.212838 kubelet[2569]: E0910 00:41:16.212763 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3961797f-cb69-46a6-8831-a00deb4ca0a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.212838 kubelet[2569]: E0910 00:41:16.212799 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3961797f-cb69-46a6-8831-a00deb4ca0a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" podUID="3961797f-cb69-46a6-8831-a00deb4ca0a0" Sep 10 00:41:16.218491 containerd[1478]: time="2025-09-10T00:41:16.218418925Z" level=error msg="StopPodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" failed" error="failed to destroy network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.218838 kubelet[2569]: E0910 00:41:16.218786 2569 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:16.218907 kubelet[2569]: E0910 00:41:16.218857 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739"} Sep 10 00:41:16.218963 kubelet[2569]: E0910 00:41:16.218902 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"85529152-632b-471b-a89e-05d8b212c595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.219047 kubelet[2569]: E0910 00:41:16.218977 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"85529152-632b-471b-a89e-05d8b212c595\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" podUID="85529152-632b-471b-a89e-05d8b212c595" Sep 10 00:41:16.226337 containerd[1478]: time="2025-09-10T00:41:16.226255976Z" level=error msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" failed" error="failed to destroy network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:16.226621 kubelet[2569]: E0910 00:41:16.226576 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:16.226690 kubelet[2569]: E0910 00:41:16.226636 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9"} Sep 10 00:41:16.226690 kubelet[2569]: E0910 00:41:16.226672 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e8668c2-c5ca-4727-aa07-f9c264cfce9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:16.226782 kubelet[2569]: E0910 00:41:16.226702 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e8668c2-c5ca-4727-aa07-f9c264cfce9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b547m" podUID="5e8668c2-c5ca-4727-aa07-f9c264cfce9b" Sep 10 00:41:23.549417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90190542.mount: Deactivated successfully. Sep 10 00:41:26.773842 containerd[1478]: time="2025-09-10T00:41:26.773735071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:26.776220 containerd[1478]: time="2025-09-10T00:41:26.776131178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 10 00:41:26.779262 containerd[1478]: time="2025-09-10T00:41:26.778132001Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:26.781213 containerd[1478]: time="2025-09-10T00:41:26.781139963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:26.781968 containerd[1478]: time="2025-09-10T00:41:26.781938960Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.650525043s" Sep 10 00:41:26.782031 containerd[1478]: time="2025-09-10T00:41:26.781974190Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 10 00:41:26.799921 containerd[1478]: time="2025-09-10T00:41:26.799870353Z" level=info msg="CreateContainer within sandbox \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 00:41:26.828165 containerd[1478]: time="2025-09-10T00:41:26.828102067Z" level=info msg="CreateContainer within sandbox \"33d9e482f194ba06710fa54bc76c937a1a1c676dbe96fe1fa89a172911c748b2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4ee2a46c4c3169ce5b3549234b66d81a04ea2de0bbc55a66bc3d7371740218f7\"" Sep 10 00:41:26.829154 containerd[1478]: time="2025-09-10T00:41:26.829126050Z" level=info msg="StartContainer for \"4ee2a46c4c3169ce5b3549234b66d81a04ea2de0bbc55a66bc3d7371740218f7\"" Sep 10 00:41:26.891609 systemd[1]: Started cri-containerd-4ee2a46c4c3169ce5b3549234b66d81a04ea2de0bbc55a66bc3d7371740218f7.scope - libcontainer container 4ee2a46c4c3169ce5b3549234b66d81a04ea2de0bbc55a66bc3d7371740218f7. 
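Every CNI failure logged above bottoms out in the same missing file: before any sandbox ADD or DEL, the Calico CNI plugin stats /var/lib/calico/nodename, which calico/node writes only once it is running with /var/lib/calico/ mounted from the host. Until the calico-node container started just above, every networking call had to fail. A minimal sketch of that gate, assuming only the path named in the error (illustrative, not Calico's actual code):

    package main

    import (
        "fmt"
        "os"
    )

    // nodename reads the file the CNI plugin keeps failing to stat in the log.
    // calico/node writes it at startup; its absence means the node agent is not
    // running, or /var/lib/calico/ is not mounted into its container.
    func nodename() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if err != nil {
            return "", fmt.Errorf("reading /var/lib/calico/nodename: %w (is calico/node running?)", err)
        }
        return string(b), nil
    }

    func main() {
        name, err := nodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node:", name)
    }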
Sep 10 00:41:26.955387 containerd[1478]: time="2025-09-10T00:41:26.955320531Z" level=info msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" Sep 10 00:41:26.955832 containerd[1478]: time="2025-09-10T00:41:26.955803099Z" level=info msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" Sep 10 00:41:26.983721 containerd[1478]: time="2025-09-10T00:41:26.983672995Z" level=info msg="StartContainer for \"4ee2a46c4c3169ce5b3549234b66d81a04ea2de0bbc55a66bc3d7371740218f7\" returns successfully" Sep 10 00:41:27.043291 containerd[1478]: time="2025-09-10T00:41:27.043067777Z" level=error msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" failed" error="failed to destroy network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:27.043600 kubelet[2569]: E0910 00:41:27.043464 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:27.043600 kubelet[2569]: E0910 00:41:27.043544 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9"} Sep 10 00:41:27.043600 kubelet[2569]: E0910 00:41:27.043592 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5e8668c2-c5ca-4727-aa07-f9c264cfce9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:27.044723 kubelet[2569]: E0910 00:41:27.043623 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5e8668c2-c5ca-4727-aa07-f9c264cfce9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-b547m" podUID="5e8668c2-c5ca-4727-aa07-f9c264cfce9b" Sep 10 00:41:27.050236 containerd[1478]: time="2025-09-10T00:41:27.049688426Z" level=error msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" failed" error="failed to destroy network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 00:41:27.050729 kubelet[2569]: E0910 
00:41:27.050672 2569 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:27.050981 kubelet[2569]: E0910 00:41:27.050842 2569 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f"} Sep 10 00:41:27.050981 kubelet[2569]: E0910 00:41:27.050902 2569 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a49cae08-4a20-4c05-9f35-ae3ac5421522\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 10 00:41:27.050981 kubelet[2569]: E0910 00:41:27.050942 2569 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a49cae08-4a20-4c05-9f35-ae3ac5421522\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6q4hq" podUID="a49cae08-4a20-4c05-9f35-ae3ac5421522" Sep 10 00:41:27.152585 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 00:41:27.153636 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 10 00:41:27.209582 kubelet[2569]: I0910 00:41:27.209503 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hdkbr" podStartSLOduration=1.156300841 podStartE2EDuration="25.20948309s" podCreationTimestamp="2025-09-10 00:41:02 +0000 UTC" firstStartedPulling="2025-09-10 00:41:02.72979406 +0000 UTC m=+19.995228023" lastFinishedPulling="2025-09-10 00:41:26.78297631 +0000 UTC m=+44.048410272" observedRunningTime="2025-09-10 00:41:27.207551877 +0000 UTC m=+44.472985839" watchObservedRunningTime="2025-09-10 00:41:27.20948309 +0000 UTC m=+44.474917052" Sep 10 00:41:27.279966 containerd[1478]: time="2025-09-10T00:41:27.279909721Z" level=info msg="StopPodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\"" Sep 10 00:41:27.356586 systemd[1]: Started sshd@9-10.0.0.90:22-10.0.0.1:41778.service - OpenSSH per-connection server daemon (10.0.0.1:41778). Sep 10 00:41:27.436265 sshd[3850]: Accepted publickey for core from 10.0.0.1 port 41778 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:27.438623 sshd[3850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:27.447714 systemd-logind[1453]: New session 10 of user core. Sep 10 00:41:27.452437 systemd[1]: Started session-10.scope - Session 10 of User core.
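The pod_startup_latency_tracker entry above is internally consistent arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent pulling images. Recomputing from the monotonic m=+ offsets in that line:

    package main

    import "fmt"

    func main() {
        // Monotonic offsets (seconds since kubelet start) copied from the log line:
        firstStartedPulling := 19.995228023
        lastFinishedPulling := 44.048410272
        e2e := 25.20948309 // podStartE2EDuration = observedRunningTime - podCreationTimestamp

        pull := lastFinishedPulling - firstStartedPulling // time spent pulling the pod's images
        slo := e2e - pull                                 // pull time does not count against the SLO
        fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, slo)   // slo=1.156300841s, matching podStartSLOduration
    }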
Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.408 [INFO][3829] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.411 [INFO][3829] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" iface="eth0" netns="/var/run/netns/cni-b8db68ca-48ff-05c6-3d60-c06182c5b915" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.412 [INFO][3829] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" iface="eth0" netns="/var/run/netns/cni-b8db68ca-48ff-05c6-3d60-c06182c5b915" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.415 [INFO][3829] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" iface="eth0" netns="/var/run/netns/cni-b8db68ca-48ff-05c6-3d60-c06182c5b915" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.415 [INFO][3829] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.415 [INFO][3829] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.540 [INFO][3860] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.541 [INFO][3860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.541 [INFO][3860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.552 [WARNING][3860] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.553 [INFO][3860] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.554 [INFO][3860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:27.563018 containerd[1478]: 2025-09-10 00:41:27.559 [INFO][3829] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:27.563502 containerd[1478]: time="2025-09-10T00:41:27.563290861Z" level=info msg="TearDown network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" successfully" Sep 10 00:41:27.563502 containerd[1478]: time="2025-09-10T00:41:27.563338444Z" level=info msg="StopPodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" returns successfully" Sep 10 00:41:27.599335 kubelet[2569]: I0910 00:41:27.599255 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x7wh\" (UniqueName: \"kubernetes.io/projected/3c61dc0e-b865-477a-ab78-34bd76f499d1-kube-api-access-7x7wh\") pod \"3c61dc0e-b865-477a-ab78-34bd76f499d1\" (UID: \"3c61dc0e-b865-477a-ab78-34bd76f499d1\") " Sep 10 00:41:27.599335 kubelet[2569]: I0910 00:41:27.599337 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-backend-key-pair\") pod \"3c61dc0e-b865-477a-ab78-34bd76f499d1\" (UID: \"3c61dc0e-b865-477a-ab78-34bd76f499d1\") " Sep 10 00:41:27.599609 kubelet[2569]: I0910 00:41:27.599384 2569 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-ca-bundle\") pod \"3c61dc0e-b865-477a-ab78-34bd76f499d1\" (UID: \"3c61dc0e-b865-477a-ab78-34bd76f499d1\") " Sep 10 00:41:27.601431 kubelet[2569]: I0910 00:41:27.601395 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3c61dc0e-b865-477a-ab78-34bd76f499d1" (UID: "3c61dc0e-b865-477a-ab78-34bd76f499d1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:41:27.606450 kubelet[2569]: I0910 00:41:27.606388 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c61dc0e-b865-477a-ab78-34bd76f499d1-kube-api-access-7x7wh" (OuterVolumeSpecName: "kube-api-access-7x7wh") pod "3c61dc0e-b865-477a-ab78-34bd76f499d1" (UID: "3c61dc0e-b865-477a-ab78-34bd76f499d1"). InnerVolumeSpecName "kube-api-access-7x7wh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:41:27.606643 kubelet[2569]: I0910 00:41:27.606565 2569 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3c61dc0e-b865-477a-ab78-34bd76f499d1" (UID: "3c61dc0e-b865-477a-ab78-34bd76f499d1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:41:27.629078 sshd[3850]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:27.634629 systemd[1]: sshd@9-10.0.0.90:22-10.0.0.1:41778.service: Deactivated successfully. Sep 10 00:41:27.637737 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 00:41:27.638727 systemd-logind[1453]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:41:27.640258 systemd-logind[1453]: Removed session 10. 
Sep 10 00:41:27.700319 kubelet[2569]: I0910 00:41:27.700177 2569 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7x7wh\" (UniqueName: \"kubernetes.io/projected/3c61dc0e-b865-477a-ab78-34bd76f499d1-kube-api-access-7x7wh\") on node \"localhost\" DevicePath \"\"" Sep 10 00:41:27.700319 kubelet[2569]: I0910 00:41:27.700282 2569 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 00:41:27.700319 kubelet[2569]: I0910 00:41:27.700300 2569 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c61dc0e-b865-477a-ab78-34bd76f499d1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 00:41:27.793333 systemd[1]: run-netns-cni\x2db8db68ca\x2d48ff\x2d05c6\x2d3d60\x2dc06182c5b915.mount: Deactivated successfully. Sep 10 00:41:27.793490 systemd[1]: var-lib-kubelet-pods-3c61dc0e\x2db865\x2d477a\x2dab78\x2d34bd76f499d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7x7wh.mount: Deactivated successfully. Sep 10 00:41:27.793588 systemd[1]: var-lib-kubelet-pods-3c61dc0e\x2db865\x2d477a\x2dab78\x2d34bd76f499d1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 00:41:27.876759 containerd[1478]: time="2025-09-10T00:41:27.876682215Z" level=info msg="StopPodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\"" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:27.977 [INFO][3910] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:27.977 [INFO][3910] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" iface="eth0" netns="/var/run/netns/cni-442381eb-30ec-92be-6d64-e1315baa0361" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:27.978 [INFO][3910] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" iface="eth0" netns="/var/run/netns/cni-442381eb-30ec-92be-6d64-e1315baa0361" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:27.978 [INFO][3910] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" iface="eth0" netns="/var/run/netns/cni-442381eb-30ec-92be-6d64-e1315baa0361" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:27.978 [INFO][3910] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:27.978 [INFO][3910] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.001 [INFO][3919] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.001 [INFO][3919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.001 [INFO][3919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.006 [WARNING][3919] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.007 [INFO][3919] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.009 [INFO][3919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:28.015408 containerd[1478]: 2025-09-10 00:41:28.012 [INFO][3910] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:28.015888 containerd[1478]: time="2025-09-10T00:41:28.015475556Z" level=info msg="TearDown network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" successfully" Sep 10 00:41:28.015888 containerd[1478]: time="2025-09-10T00:41:28.015513834Z" level=info msg="StopPodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" returns successfully" Sep 10 00:41:28.016683 containerd[1478]: time="2025-09-10T00:41:28.016518020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66f64968dc-xxlgr,Uid:85529152-632b-471b-a89e-05d8b212c595,Namespace:calico-system,Attempt:1,}" Sep 10 00:41:28.018352 systemd[1]: run-netns-cni\x2d442381eb\x2d30ec\x2d92be\x2d6d64\x2de1315baa0361.mount: Deactivated successfully. Sep 10 00:41:28.178419 systemd[1]: Removed slice kubepods-besteffort-pod3c61dc0e_b865_477a_ab78_34bd76f499d1.slice - libcontainer container kubepods-besteffort-pod3c61dc0e_b865_477a_ab78_34bd76f499d1.slice. 
Sep 10 00:41:28.879029 containerd[1478]: time="2025-09-10T00:41:28.878394194Z" level=info msg="StopPodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\"" Sep 10 00:41:28.886563 containerd[1478]: time="2025-09-10T00:41:28.885906643Z" level=info msg="StopPodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\"" Sep 10 00:41:28.896265 kubelet[2569]: I0910 00:41:28.896077 2569 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c61dc0e-b865-477a-ab78-34bd76f499d1" path="/var/lib/kubelet/pods/3c61dc0e-b865-477a-ab78-34bd76f499d1/volumes" Sep 10 00:41:28.903754 systemd[1]: Created slice kubepods-besteffort-pod283af504_d1aa_4120_b2f7_78631288b373.slice - libcontainer container kubepods-besteffort-pod283af504_d1aa_4120_b2f7_78631288b373.slice. Sep 10 00:41:28.915214 kubelet[2569]: I0910 00:41:28.912491 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/283af504-d1aa-4120-b2f7-78631288b373-whisker-backend-key-pair\") pod \"whisker-5f7cb9bf59-6278x\" (UID: \"283af504-d1aa-4120-b2f7-78631288b373\") " pod="calico-system/whisker-5f7cb9bf59-6278x" Sep 10 00:41:28.915214 kubelet[2569]: I0910 00:41:28.912534 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fntrs\" (UniqueName: \"kubernetes.io/projected/283af504-d1aa-4120-b2f7-78631288b373-kube-api-access-fntrs\") pod \"whisker-5f7cb9bf59-6278x\" (UID: \"283af504-d1aa-4120-b2f7-78631288b373\") " pod="calico-system/whisker-5f7cb9bf59-6278x" Sep 10 00:41:28.915214 kubelet[2569]: I0910 00:41:28.912581 2569 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/283af504-d1aa-4120-b2f7-78631288b373-whisker-ca-bundle\") pod \"whisker-5f7cb9bf59-6278x\" (UID: \"283af504-d1aa-4120-b2f7-78631288b373\") " pod="calico-system/whisker-5f7cb9bf59-6278x" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:28.978 [INFO][4065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:28.978 [INFO][4065] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" iface="eth0" netns="/var/run/netns/cni-7a799de4-59d2-ab90-cc89-ef7000c7f98c" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:28.982 [INFO][4065] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" iface="eth0" netns="/var/run/netns/cni-7a799de4-59d2-ab90-cc89-ef7000c7f98c" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:28.982 [INFO][4065] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" iface="eth0" netns="/var/run/netns/cni-7a799de4-59d2-ab90-cc89-ef7000c7f98c" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:28.982 [INFO][4065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:28.982 [INFO][4065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.035 [INFO][4088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.035 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.035 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.052 [WARNING][4088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.052 [INFO][4088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.056 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:29.067815 containerd[1478]: 2025-09-10 00:41:29.060 [INFO][4065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:29.069860 containerd[1478]: time="2025-09-10T00:41:29.069821480Z" level=info msg="TearDown network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" successfully" Sep 10 00:41:29.069977 containerd[1478]: time="2025-09-10T00:41:29.069956134Z" level=info msg="StopPodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" returns successfully" Sep 10 00:41:29.072085 kubelet[2569]: E0910 00:41:29.071922 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:29.074726 containerd[1478]: time="2025-09-10T00:41:29.073180647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fkc6w,Uid:39471f10-8655-44e1-b957-a2e56d511c05,Namespace:kube-system,Attempt:1,}" Sep 10 00:41:29.076745 systemd[1]: run-netns-cni\x2d7a799de4\x2d59d2\x2dab90\x2dcc89\x2def7000c7f98c.mount: Deactivated successfully. 
Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.068 [INFO][4045] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.069 [INFO][4045] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" iface="eth0" netns="/var/run/netns/cni-c8685eb3-9065-5f01-d92a-a0630cd956cc" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.072 [INFO][4045] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" iface="eth0" netns="/var/run/netns/cni-c8685eb3-9065-5f01-d92a-a0630cd956cc" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.072 [INFO][4045] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" iface="eth0" netns="/var/run/netns/cni-c8685eb3-9065-5f01-d92a-a0630cd956cc" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.072 [INFO][4045] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.072 [INFO][4045] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.118 [INFO][4105] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.120 [INFO][4105] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.120 [INFO][4105] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.127 [WARNING][4105] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.127 [INFO][4105] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.129 [INFO][4105] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:29.146954 containerd[1478]: 2025-09-10 00:41:29.137 [INFO][4045] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:29.148864 containerd[1478]: time="2025-09-10T00:41:29.148486177Z" level=info msg="TearDown network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" successfully" Sep 10 00:41:29.148864 containerd[1478]: time="2025-09-10T00:41:29.148515600Z" level=info msg="StopPodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" returns successfully" Sep 10 00:41:29.151127 containerd[1478]: time="2025-09-10T00:41:29.150947046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-524kf,Uid:3961797f-cb69-46a6-8831-a00deb4ca0a0,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:41:29.213328 containerd[1478]: time="2025-09-10T00:41:29.213265039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f7cb9bf59-6278x,Uid:283af504-d1aa-4120-b2f7-78631288b373,Namespace:calico-system,Attempt:0,}" Sep 10 00:41:29.264496 systemd-networkd[1405]: calia4606df1c89: Link UP Sep 10 00:41:29.270530 systemd-networkd[1405]: calia4606df1c89: Gained carrier Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.041 [INFO][4050] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.065 [INFO][4050] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0 calico-kube-controllers-66f64968dc- calico-system 85529152-632b-471b-a89e-05d8b212c595 954 0 2025-09-10 00:41:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:66f64968dc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-66f64968dc-xxlgr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia4606df1c89 [] [] }} ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.065 [INFO][4050] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.134 [INFO][4107] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" HandleID="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.135 [INFO][4107] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" HandleID="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d9c0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-66f64968dc-xxlgr", "timestamp":"2025-09-10 00:41:29.134952952 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.135 [INFO][4107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.136 [INFO][4107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.136 [INFO][4107] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.152 [INFO][4107] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.201 [INFO][4107] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.223 [INFO][4107] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.226 [INFO][4107] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.230 [INFO][4107] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.230 [INFO][4107] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.232 [INFO][4107] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9 Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.236 [INFO][4107] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.240 [INFO][4107] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.240 [INFO][4107] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" host="localhost" Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.240 [INFO][4107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
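
The ipam/ipam.go entries above — and the near-identical runs that follow for coredns, the two calico-apiserver pods, and whisker — all trace the same path: acquire the host-wide IPAM lock, confirm this host's affinity for the block 192.168.88.128/26, claim the next free address, write the block back, and release the lock. The teardown earlier shows the release side is deliberately idempotent ("Asked to release address but it doesn't exist. Ignoring"). A minimal Go sketch of that pattern — a toy model, not Calico's actual allocator:

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// blockAllocator hands out addresses from one affine CIDR block,
// serialized by a host-wide lock, the way the ipam_plugin entries
// above describe. Toy model only.
type blockAllocator struct {
	mu     sync.Mutex // the "host-wide IPAM lock"
	block  netip.Prefix
	next   netip.Addr
	byUser map[string]netip.Addr // handleID -> claimed address
}

func newBlockAllocator(cidr string) *blockAllocator {
	p := netip.MustParsePrefix(cidr)
	return &blockAllocator{
		block:  p,
		next:   p.Addr().Next(), // .128 is the network address; start at .129
		byUser: map[string]netip.Addr{},
	}
}

func (a *blockAllocator) Assign(handleID string) (netip.Addr, error) {
	a.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock()
	if !a.block.Contains(a.next) {
		return netip.Addr{}, fmt.Errorf("block %s exhausted", a.block)
	}
	ip := a.next
	a.next = ip.Next() // "Writing block in order to claim IPs"
	a.byUser[handleID] = ip
	return ip, nil
}

// Release is idempotent: unknown handles are ignored, mirroring the
// "Asked to release address but it doesn't exist. Ignoring" warning.
func (a *blockAllocator) Release(handleID string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	delete(a.byUser, handleID)
}

func main() {
	alloc := newBlockAllocator("192.168.88.128/26")
	for _, h := range []string{"kube-controllers", "coredns", "apiserver"} {
		ip, _ := alloc.Assign(h)
		fmt.Println(h, "->", ip) // .129, .130, .131 — as in the log
	}
	alloc.Release("no-such-handle") // a no-op, like the WARNING above
}
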
Sep 10 00:41:29.301482 containerd[1478]: 2025-09-10 00:41:29.240 [INFO][4107] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" HandleID="k8s-pod-network.522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.302129 containerd[1478]: 2025-09-10 00:41:29.244 [INFO][4050] cni-plugin/k8s.go 418: Populated endpoint ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0", GenerateName:"calico-kube-controllers-66f64968dc-", Namespace:"calico-system", SelfLink:"", UID:"85529152-632b-471b-a89e-05d8b212c595", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66f64968dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-66f64968dc-xxlgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4606df1c89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.302129 containerd[1478]: 2025-09-10 00:41:29.244 [INFO][4050] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.302129 containerd[1478]: 2025-09-10 00:41:29.244 [INFO][4050] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4606df1c89 ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.302129 containerd[1478]: 2025-09-10 00:41:29.265 [INFO][4050] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.302129 containerd[1478]: 2025-09-10 00:41:29.273 [INFO][4050] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0", GenerateName:"calico-kube-controllers-66f64968dc-", Namespace:"calico-system", SelfLink:"", UID:"85529152-632b-471b-a89e-05d8b212c595", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66f64968dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9", Pod:"calico-kube-controllers-66f64968dc-xxlgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4606df1c89", MAC:"66:c8:05:a1:86:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.302129 containerd[1478]: 2025-09-10 00:41:29.289 [INFO][4050] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9" Namespace="calico-system" Pod="calico-kube-controllers-66f64968dc-xxlgr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:29.348920 systemd-networkd[1405]: calie8d11c7396c: Link UP Sep 10 00:41:29.350140 systemd-networkd[1405]: calie8d11c7396c: Gained carrier Sep 10 00:41:29.355285 containerd[1478]: time="2025-09-10T00:41:29.355102097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:29.355444 containerd[1478]: time="2025-09-10T00:41:29.355313110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:29.355444 containerd[1478]: time="2025-09-10T00:41:29.355333005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.360859 containerd[1478]: time="2025-09-10T00:41:29.360593890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.175 [INFO][4117] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.192 [INFO][4117] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0 coredns-674b8bbfcf- kube-system 39471f10-8655-44e1-b957-a2e56d511c05 981 0 2025-09-10 00:40:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fkc6w eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie8d11c7396c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.192 [INFO][4117] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.279 [INFO][4155] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" HandleID="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.280 [INFO][4155] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" HandleID="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324b60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fkc6w", "timestamp":"2025-09-10 00:41:29.279781752 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.280 [INFO][4155] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.280 [INFO][4155] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.280 [INFO][4155] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.295 [INFO][4155] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.302 [INFO][4155] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.308 [INFO][4155] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.310 [INFO][4155] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.313 [INFO][4155] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.313 [INFO][4155] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.315 [INFO][4155] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.327 [INFO][4155] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.339 [INFO][4155] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.339 [INFO][4155] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" host="localhost" Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.339 [INFO][4155] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:41:29.373918 containerd[1478]: 2025-09-10 00:41:29.339 [INFO][4155] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" HandleID="k8s-pod-network.43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.374564 containerd[1478]: 2025-09-10 00:41:29.342 [INFO][4117] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39471f10-8655-44e1-b957-a2e56d511c05", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fkc6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8d11c7396c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.374564 containerd[1478]: 2025-09-10 00:41:29.342 [INFO][4117] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.374564 containerd[1478]: 2025-09-10 00:41:29.342 [INFO][4117] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8d11c7396c ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.374564 containerd[1478]: 2025-09-10 00:41:29.351 [INFO][4117] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.374564 
containerd[1478]: 2025-09-10 00:41:29.353 [INFO][4117] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39471f10-8655-44e1-b957-a2e56d511c05", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a", Pod:"coredns-674b8bbfcf-fkc6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8d11c7396c", MAC:"42:91:26:02:d9:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.374564 containerd[1478]: 2025-09-10 00:41:29.369 [INFO][4117] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a" Namespace="kube-system" Pod="coredns-674b8bbfcf-fkc6w" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:29.381236 kernel: bpftool[4268]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 10 00:41:29.392664 systemd[1]: Started cri-containerd-522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9.scope - libcontainer container 522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9. Sep 10 00:41:29.408794 containerd[1478]: time="2025-09-10T00:41:29.408642233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:29.408981 containerd[1478]: time="2025-09-10T00:41:29.408953697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:29.409084 containerd[1478]: time="2025-09-10T00:41:29.409060050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.415929 containerd[1478]: time="2025-09-10T00:41:29.413735472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.440987 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:29.445641 systemd-networkd[1405]: cali3cae26fb9f1: Link UP Sep 10 00:41:29.446344 systemd-networkd[1405]: cali3cae26fb9f1: Gained carrier Sep 10 00:41:29.452452 systemd[1]: Started cri-containerd-43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a.scope - libcontainer container 43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a. Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.257 [INFO][4157] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.275 [INFO][4157] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0 calico-apiserver-66b8fdf8b8- calico-apiserver 3961797f-cb69-46a6-8831-a00deb4ca0a0 982 0 2025-09-10 00:40:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b8fdf8b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66b8fdf8b8-524kf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3cae26fb9f1 [] [] }} ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.275 [INFO][4157] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.322 [INFO][4202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" HandleID="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.322 [INFO][4202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" HandleID="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139b80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66b8fdf8b8-524kf", "timestamp":"2025-09-10 00:41:29.322629821 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 
00:41:29.322 [INFO][4202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.340 [INFO][4202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.340 [INFO][4202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.399 [INFO][4202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.405 [INFO][4202] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.410 [INFO][4202] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.412 [INFO][4202] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.414 [INFO][4202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.414 [INFO][4202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.416 [INFO][4202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980 Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.422 [INFO][4202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.428 [INFO][4202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.428 [INFO][4202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" host="localhost" Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.428 [INFO][4202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
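
The workloads in this section draw sequential addresses from that single affine block: .129 (calico-kube-controllers), .130 (coredns), .131 and later .133 (the two calico-apiserver replicas), and .132 (whisker). A /26 holds 64 addresses, so one block bounds how many pods the host can place before claiming another (MaxBlocksPerHost:0 in the requests means unlimited extra blocks). The coredns endpoint dump above also prints its ports in hex: 0x35 is 53 (dns and dns-tcp) and 0x23c1 is 9153, CoreDNS's usual Prometheus metrics port. A small Go illustration of the block arithmetic:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	// A /26 spans 2^(32-26) = 64 addresses.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// Walk the range the way the assignments in this log do.
	ip := block.Addr().Next() // skip the network address .128
	for i := 0; i < 5; i++ {
		fmt.Println(ip) // .129, .130, .131, .132, .133 — the five pods here
		ip = ip.Next()
	}

	// The coredns endpoint dump encodes its ports in hex:
	fmt.Println(0x35, 0x23c1) // 53 (dns, dns-tcp) and 9153 (metrics)
}
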
Sep 10 00:41:29.471343 containerd[1478]: 2025-09-10 00:41:29.428 [INFO][4202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" HandleID="k8s-pod-network.41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.471931 containerd[1478]: 2025-09-10 00:41:29.442 [INFO][4157] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3961797f-cb69-46a6-8831-a00deb4ca0a0", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66b8fdf8b8-524kf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cae26fb9f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.471931 containerd[1478]: 2025-09-10 00:41:29.442 [INFO][4157] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.471931 containerd[1478]: 2025-09-10 00:41:29.442 [INFO][4157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cae26fb9f1 ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.471931 containerd[1478]: 2025-09-10 00:41:29.444 [INFO][4157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.471931 containerd[1478]: 2025-09-10 00:41:29.445 [INFO][4157] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3961797f-cb69-46a6-8831-a00deb4ca0a0", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980", Pod:"calico-apiserver-66b8fdf8b8-524kf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cae26fb9f1", MAC:"36:fa:c0:de:88:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.471931 containerd[1478]: 2025-09-10 00:41:29.462 [INFO][4157] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-524kf" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:29.473355 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:29.492918 containerd[1478]: time="2025-09-10T00:41:29.491169549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-66f64968dc-xxlgr,Uid:85529152-632b-471b-a89e-05d8b212c595,Namespace:calico-system,Attempt:1,} returns sandbox id \"522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9\"" Sep 10 00:41:29.493670 containerd[1478]: time="2025-09-10T00:41:29.493648291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 00:41:29.508737 containerd[1478]: time="2025-09-10T00:41:29.508698295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fkc6w,Uid:39471f10-8655-44e1-b957-a2e56d511c05,Namespace:kube-system,Attempt:1,} returns sandbox id \"43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a\"" Sep 10 00:41:29.509305 kubelet[2569]: E0910 00:41:29.509260 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:29.633354 containerd[1478]: time="2025-09-10T00:41:29.632448215Z" level=info msg="CreateContainer within sandbox 
\"43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:41:29.659575 containerd[1478]: time="2025-09-10T00:41:29.658940835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:29.659575 containerd[1478]: time="2025-09-10T00:41:29.659047188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:29.659575 containerd[1478]: time="2025-09-10T00:41:29.659103951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.664285 containerd[1478]: time="2025-09-10T00:41:29.662213426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.677663 containerd[1478]: time="2025-09-10T00:41:29.677540602Z" level=info msg="CreateContainer within sandbox \"43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce30f12ac5797484d76d75532261cea9ddf0f10db46e8a9a7a9a54e76015a219\"" Sep 10 00:41:29.679796 containerd[1478]: time="2025-09-10T00:41:29.678631649Z" level=info msg="StartContainer for \"ce30f12ac5797484d76d75532261cea9ddf0f10db46e8a9a7a9a54e76015a219\"" Sep 10 00:41:29.692912 systemd-networkd[1405]: calicdc2611ccb0: Link UP Sep 10 00:41:29.693635 systemd-networkd[1405]: calicdc2611ccb0: Gained carrier Sep 10 00:41:29.696793 systemd[1]: Started cri-containerd-41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980.scope - libcontainer container 41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980. 
Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.301 [INFO][4182] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.333 [INFO][4182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f7cb9bf59--6278x-eth0 whisker-5f7cb9bf59- calico-system 283af504-d1aa-4120-b2f7-78631288b373 976 0 2025-09-10 00:41:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f7cb9bf59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f7cb9bf59-6278x eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calicdc2611ccb0 [] [] }} ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.334 [INFO][4182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.398 [INFO][4244] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" HandleID="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Workload="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.398 [INFO][4244] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" HandleID="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Workload="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f7cb9bf59-6278x", "timestamp":"2025-09-10 00:41:29.398116639 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.398 [INFO][4244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.428 [INFO][4244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.428 [INFO][4244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.629 [INFO][4244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.643 [INFO][4244] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.650 [INFO][4244] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.652 [INFO][4244] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.656 [INFO][4244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.657 [INFO][4244] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.660 [INFO][4244] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.668 [INFO][4244] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.677 [INFO][4244] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.677 [INFO][4244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" host="localhost" Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.677 [INFO][4244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:41:29.717181 containerd[1478]: 2025-09-10 00:41:29.677 [INFO][4244] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" HandleID="k8s-pod-network.ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Workload="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.718825 containerd[1478]: 2025-09-10 00:41:29.688 [INFO][4182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f7cb9bf59--6278x-eth0", GenerateName:"whisker-5f7cb9bf59-", Namespace:"calico-system", SelfLink:"", UID:"283af504-d1aa-4120-b2f7-78631288b373", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f7cb9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f7cb9bf59-6278x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicdc2611ccb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.718825 containerd[1478]: 2025-09-10 00:41:29.689 [INFO][4182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.718825 containerd[1478]: 2025-09-10 00:41:29.689 [INFO][4182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdc2611ccb0 ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.718825 containerd[1478]: 2025-09-10 00:41:29.693 [INFO][4182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.718825 containerd[1478]: 2025-09-10 00:41:29.694 [INFO][4182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f7cb9bf59--6278x-eth0", GenerateName:"whisker-5f7cb9bf59-", Namespace:"calico-system", SelfLink:"", UID:"283af504-d1aa-4120-b2f7-78631288b373", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f7cb9bf59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f", Pod:"whisker-5f7cb9bf59-6278x", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calicdc2611ccb0", MAC:"66:dc:22:e7:37:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:29.718825 containerd[1478]: 2025-09-10 00:41:29.709 [INFO][4182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f" Namespace="calico-system" Pod="whisker-5f7cb9bf59-6278x" WorkloadEndpoint="localhost-k8s-whisker--5f7cb9bf59--6278x-eth0" Sep 10 00:41:29.726500 systemd[1]: Started cri-containerd-ce30f12ac5797484d76d75532261cea9ddf0f10db46e8a9a7a9a54e76015a219.scope - libcontainer container ce30f12ac5797484d76d75532261cea9ddf0f10db46e8a9a7a9a54e76015a219. Sep 10 00:41:29.735404 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:29.768383 systemd-networkd[1405]: vxlan.calico: Link UP Sep 10 00:41:29.768399 systemd-networkd[1405]: vxlan.calico: Gained carrier Sep 10 00:41:29.773113 containerd[1478]: time="2025-09-10T00:41:29.772977193Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:29.773664 containerd[1478]: time="2025-09-10T00:41:29.773304195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:29.773664 containerd[1478]: time="2025-09-10T00:41:29.773349717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.773664 containerd[1478]: time="2025-09-10T00:41:29.773557774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:29.795930 containerd[1478]: time="2025-09-10T00:41:29.795880869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-524kf,Uid:3961797f-cb69-46a6-8831-a00deb4ca0a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980\"" Sep 10 00:41:29.805560 systemd[1]: Started cri-containerd-ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f.scope - libcontainer container ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f. Sep 10 00:41:29.808087 containerd[1478]: time="2025-09-10T00:41:29.807596509Z" level=info msg="StartContainer for \"ce30f12ac5797484d76d75532261cea9ddf0f10db46e8a9a7a9a54e76015a219\" returns successfully" Sep 10 00:41:29.827025 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:29.850661 systemd[1]: run-netns-cni\x2dc8685eb3\x2d9065\x2d5f01\x2dd92a\x2da0630cd956cc.mount: Deactivated successfully. Sep 10 00:41:29.891826 containerd[1478]: time="2025-09-10T00:41:29.890794471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f7cb9bf59-6278x,Uid:283af504-d1aa-4120-b2f7-78631288b373,Namespace:calico-system,Attempt:0,} returns sandbox id \"ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f\"" Sep 10 00:41:30.192834 kubelet[2569]: E0910 00:41:30.192776 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:30.209225 kubelet[2569]: I0910 00:41:30.208685 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fkc6w" podStartSLOduration=42.208660725 podStartE2EDuration="42.208660725s" podCreationTimestamp="2025-09-10 00:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:41:30.208388721 +0000 UTC m=+47.473822693" watchObservedRunningTime="2025-09-10 00:41:30.208660725 +0000 UTC m=+47.474094687" Sep 10 00:41:30.659456 systemd-networkd[1405]: calia4606df1c89: Gained IPv6LL Sep 10 00:41:30.723401 systemd-networkd[1405]: calicdc2611ccb0: Gained IPv6LL Sep 10 00:41:30.876171 containerd[1478]: time="2025-09-10T00:41:30.875941155Z" level=info msg="StopPodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\"" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.926 [INFO][4558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.926 [INFO][4558] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" iface="eth0" netns="/var/run/netns/cni-4c2e96e7-fda8-3d72-ee60-fd63e7c9586a" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.926 [INFO][4558] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" iface="eth0" netns="/var/run/netns/cni-4c2e96e7-fda8-3d72-ee60-fd63e7c9586a" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.927 [INFO][4558] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" iface="eth0" netns="/var/run/netns/cni-4c2e96e7-fda8-3d72-ee60-fd63e7c9586a" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.927 [INFO][4558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.927 [INFO][4558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.952 [INFO][4566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.952 [INFO][4566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.952 [INFO][4566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.959 [WARNING][4566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.959 [INFO][4566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.960 [INFO][4566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:30.967560 containerd[1478]: 2025-09-10 00:41:30.964 [INFO][4558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:30.968792 containerd[1478]: time="2025-09-10T00:41:30.968749462Z" level=info msg="TearDown network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" successfully" Sep 10 00:41:30.968848 containerd[1478]: time="2025-09-10T00:41:30.968795445Z" level=info msg="StopPodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" returns successfully" Sep 10 00:41:30.971064 systemd[1]: run-netns-cni\x2d4c2e96e7\x2dfda8\x2d3d72\x2dee60\x2dfd63e7c9586a.mount: Deactivated successfully. 
Sep 10 00:41:30.971684 containerd[1478]: time="2025-09-10T00:41:30.971641696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-gh75h,Uid:0742522a-e5f6-4d86-9672-4927d9011444,Namespace:calico-apiserver,Attempt:1,}" Sep 10 00:41:30.980316 systemd-networkd[1405]: calie8d11c7396c: Gained IPv6LL Sep 10 00:41:31.160969 systemd-networkd[1405]: cali11f2d5435ee: Link UP Sep 10 00:41:31.161498 systemd-networkd[1405]: cali11f2d5435ee: Gained carrier Sep 10 00:41:31.206310 kubelet[2569]: E0910 00:41:31.206258 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.051 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0 calico-apiserver-66b8fdf8b8- calico-apiserver 0742522a-e5f6-4d86-9672-4927d9011444 1022 0 2025-09-10 00:40:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b8fdf8b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66b8fdf8b8-gh75h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali11f2d5435ee [] [] }} ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.051 [INFO][4574] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.119 [INFO][4590] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" HandleID="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.119 [INFO][4590] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" HandleID="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001bbb90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66b8fdf8b8-gh75h", "timestamp":"2025-09-10 00:41:31.118999471 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.119 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
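
The recurring kubelet error ("Nameserver limits were exceeded, some nameservers have been omitted...") is kubelet enforcing the classic glibc resolver cap of three nameservers: the node's resolv.conf lists more than three, so only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are propagated into pod resolv.conf files. A toy version of the trimming rule — the fourth server below is a stand-in, since the log does not say which entries were dropped:

package main

import "fmt"

// capNameservers applies the glibc-style limit of three resolvers,
// preserving order, the way kubelet trims the node resolv.conf before
// writing a pod's. Illustrative only; not kubelet's code.
func capNameservers(servers []string) (applied, omitted []string) {
	const maxNS = 3
	if len(servers) <= maxNS {
		return servers, nil
	}
	return servers[:maxNS], servers[maxNS:]
}

func main() {
	applied, omitted := capNameservers([]string{
		"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9", // last one hypothetical
	})
	fmt.Println("applied:", applied) // the three in the kubelet error
	fmt.Println("omitted:", omitted) // logged as "have been omitted"
}
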
Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.119 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.119 [INFO][4590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.126 [INFO][4590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.131 [INFO][4590] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.136 [INFO][4590] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.138 [INFO][4590] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.141 [INFO][4590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.141 [INFO][4590] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.142 [INFO][4590] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1 Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.147 [INFO][4590] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.152 [INFO][4590] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.152 [INFO][4590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" host="localhost" Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.152 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
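The IPAM sequence above (acquire the host-wide lock, confirm this host's affinity to the block 192.168.88.128/26, claim the next free address, record it under a handle, release the lock) is the core of block-based allocation. A self-contained toy model of that pattern, not Calico's actual ipam package:

```go
package main

import (
	"errors"
	"fmt"
	"net"
	"sync"
)

// blockIPAM hands out addresses from a single /26 owned by this host,
// under one host-wide lock, mirroring the log sequence above.
type blockIPAM struct {
	mu   sync.Mutex     // the "host-wide IPAM lock"
	cidr *net.IPNet     // e.g. 192.168.88.128/26
	used map[int]string // offset within the block -> handle ID
}

func newBlockIPAM(cidr string) *blockIPAM {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		panic(err)
	}
	return &blockIPAM{cidr: n, used: map[int]string{}}
}

func (b *blockIPAM) Assign(handle string) (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	ones, bits := b.cidr.Mask.Size()
	for off := 1; off < 1<<(bits-ones); off++ { // skip the network address
		if _, taken := b.used[off]; taken {
			continue
		}
		b.used[off] = handle
		ip := make(net.IP, 4)
		copy(ip, b.cidr.IP.To4())
		ip[3] += byte(off)
		return ip, nil
	}
	return nil, errors.New("block exhausted")
}

// Release is idempotent: releasing an unknown handle is a no-op, matching
// the "Asked to release address but it doesn't exist. Ignoring" warning.
func (b *blockIPAM) Release(handle string) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for off, h := range b.used {
		if h == handle {
			delete(b.used, off)
		}
	}
}

func main() {
	ipam := newBlockIPAM("192.168.88.128/26")
	for i := 0; i < 6; i++ {
		ip, _ := ipam.Assign(fmt.Sprintf("k8s-pod-network.%d", i))
		fmt.Println(ip) // .129 through .134, as in the claims above
	}
}
```

The idempotent Release is why the repeated StopPodSandbox retries in this log are harmless: a second DEL finds nothing to free and simply logs the WARNING.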
Sep 10 00:41:31.332679 containerd[1478]: 2025-09-10 00:41:31.152 [INFO][4590] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" HandleID="k8s-pod-network.421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.333844 containerd[1478]: 2025-09-10 00:41:31.158 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0742522a-e5f6-4d86-9672-4927d9011444", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66b8fdf8b8-gh75h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11f2d5435ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:31.333844 containerd[1478]: 2025-09-10 00:41:31.158 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.333844 containerd[1478]: 2025-09-10 00:41:31.158 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali11f2d5435ee ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.333844 containerd[1478]: 2025-09-10 00:41:31.161 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.333844 containerd[1478]: 2025-09-10 00:41:31.161 [INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0742522a-e5f6-4d86-9672-4927d9011444", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1", Pod:"calico-apiserver-66b8fdf8b8-gh75h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11f2d5435ee", MAC:"d6:c9:58:ce:82:9d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:31.333844 containerd[1478]: 2025-09-10 00:41:31.327 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1" Namespace="calico-apiserver" Pod="calico-apiserver-66b8fdf8b8-gh75h" WorkloadEndpoint="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:31.363409 systemd-networkd[1405]: cali3cae26fb9f1: Gained IPv6LL Sep 10 00:41:31.466446 containerd[1478]: time="2025-09-10T00:41:31.466319852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:31.466446 containerd[1478]: time="2025-09-10T00:41:31.466390360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:31.466446 containerd[1478]: time="2025-09-10T00:41:31.466404615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:31.468048 containerd[1478]: time="2025-09-10T00:41:31.467592819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:31.491636 systemd-networkd[1405]: vxlan.calico: Gained IPv6LL Sep 10 00:41:31.504485 systemd[1]: Started cri-containerd-421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1.scope - libcontainer container 421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1. 
Sep 10 00:41:31.523837 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:31.559990 containerd[1478]: time="2025-09-10T00:41:31.559930105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b8fdf8b8-gh75h,Uid:0742522a-e5f6-4d86-9672-4927d9011444,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1\"" Sep 10 00:41:31.878350 containerd[1478]: time="2025-09-10T00:41:31.877422751Z" level=info msg="StopPodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\"" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:31.977 [INFO][4661] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:31.978 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" iface="eth0" netns="/var/run/netns/cni-adc60b5a-7a4c-8b69-6280-075d1dbc246c" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:31.978 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" iface="eth0" netns="/var/run/netns/cni-adc60b5a-7a4c-8b69-6280-075d1dbc246c" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:31.979 [INFO][4661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" iface="eth0" netns="/var/run/netns/cni-adc60b5a-7a4c-8b69-6280-075d1dbc246c" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:31.979 [INFO][4661] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:31.979 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.026 [INFO][4670] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.026 [INFO][4670] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.026 [INFO][4670] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.034 [WARNING][4670] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.034 [INFO][4670] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.036 [INFO][4670] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:32.045799 containerd[1478]: 2025-09-10 00:41:32.041 [INFO][4661] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:32.052051 containerd[1478]: time="2025-09-10T00:41:32.049355491Z" level=info msg="TearDown network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" successfully" Sep 10 00:41:32.052051 containerd[1478]: time="2025-09-10T00:41:32.049418606Z" level=info msg="StopPodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" returns successfully" Sep 10 00:41:32.052051 containerd[1478]: time="2025-09-10T00:41:32.050705465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5swrp,Uid:cc162702-bc71-43f1-9f9b-1556715e5f12,Namespace:calico-system,Attempt:1,}" Sep 10 00:41:32.052540 systemd[1]: run-netns-cni\x2dadc60b5a\x2d7a4c\x2d8b69\x2d6280\x2d075d1dbc246c.mount: Deactivated successfully. Sep 10 00:41:32.224313 kubelet[2569]: E0910 00:41:32.222845 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:32.645522 systemd[1]: Started sshd@10-10.0.0.90:22-10.0.0.1:34464.service - OpenSSH per-connection server daemon (10.0.0.1:34464). 
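The recurring kubelet dns.go:153 errors ("Nameserver limits were exceeded…") reflect the glibc resolver's limit of three nameserver entries in resolv.conf: kubelet keeps the first three and reports the rest as omitted, which is why exactly `1.1.1.1 1.0.0.1 8.8.8.8` show up as the applied line. A toy version of that trimming policy (not kubelet's actual code):

```go
package main

import "fmt"

// capNameservers keeps at most max entries, returning the applied list and
// whether anything was dropped.
func capNameservers(ns []string, max int) (applied []string, dropped bool) {
	if len(ns) <= max {
		return ns, false
	}
	return ns[:max], true
}

func main() {
	ns := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	applied, dropped := capNameservers(ns, 3) // glibc reads at most 3
	fmt.Println(applied, dropped)             // [1.1.1.1 1.0.0.1 8.8.8.8] true
}
```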
Sep 10 00:41:32.658959 containerd[1478]: time="2025-09-10T00:41:32.658854408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:32.660430 systemd-networkd[1405]: cali4175ae473ed: Link UP Sep 10 00:41:32.660659 systemd-networkd[1405]: cali4175ae473ed: Gained carrier Sep 10 00:41:32.663055 containerd[1478]: time="2025-09-10T00:41:32.662617537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 10 00:41:32.692611 containerd[1478]: time="2025-09-10T00:41:32.692546147Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:32.706015 containerd[1478]: time="2025-09-10T00:41:32.705461598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:32.710690 containerd[1478]: time="2025-09-10T00:41:32.710607360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 3.216836976s" Sep 10 00:41:32.710690 containerd[1478]: time="2025-09-10T00:41:32.710686494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 10 00:41:32.712465 containerd[1478]: time="2025-09-10T00:41:32.712388510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.230 [INFO][4680] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--5swrp-eth0 goldmane-54d579b49d- calico-system cc162702-bc71-43f1-9f9b-1556715e5f12 1034 0 2025-09-10 00:41:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-5swrp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4175ae473ed [] [] }} ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.233 [INFO][4680] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.350 [INFO][4692] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" 
HandleID="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.351 [INFO][4692] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" HandleID="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139410), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-5swrp", "timestamp":"2025-09-10 00:41:32.350852779 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.351 [INFO][4692] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.351 [INFO][4692] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.351 [INFO][4692] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.410 [INFO][4692] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.485 [INFO][4692] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.527 [INFO][4692] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.530 [INFO][4692] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.532 [INFO][4692] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.532 [INFO][4692] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.534 [INFO][4692] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.603 [INFO][4692] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.651 [INFO][4692] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" host="localhost" Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.651 [INFO][4692] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" host="localhost" Sep 10 
00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.651 [INFO][4692] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:32.714788 containerd[1478]: 2025-09-10 00:41:32.651 [INFO][4692] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" HandleID="k8s-pod-network.54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.715633 containerd[1478]: 2025-09-10 00:41:32.656 [INFO][4680] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--5swrp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc162702-bc71-43f1-9f9b-1556715e5f12", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-5swrp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4175ae473ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:32.715633 containerd[1478]: 2025-09-10 00:41:32.656 [INFO][4680] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.715633 containerd[1478]: 2025-09-10 00:41:32.656 [INFO][4680] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4175ae473ed ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.715633 containerd[1478]: 2025-09-10 00:41:32.662 [INFO][4680] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.715633 containerd[1478]: 2025-09-10 00:41:32.665 [INFO][4680] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" 
Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--5swrp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc162702-bc71-43f1-9f9b-1556715e5f12", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f", Pod:"goldmane-54d579b49d-5swrp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4175ae473ed", MAC:"72:7d:86:63:33:c3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:32.715633 containerd[1478]: 2025-09-10 00:41:32.706 [INFO][4680] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f" Namespace="calico-system" Pod="goldmane-54d579b49d-5swrp" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:32.738793 containerd[1478]: time="2025-09-10T00:41:32.738490316Z" level=info msg="CreateContainer within sandbox \"522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 00:41:32.755240 containerd[1478]: time="2025-09-10T00:41:32.754394381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:32.755240 containerd[1478]: time="2025-09-10T00:41:32.754502960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:32.755240 containerd[1478]: time="2025-09-10T00:41:32.754539907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:32.755240 containerd[1478]: time="2025-09-10T00:41:32.754684110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:32.757419 sshd[4701]: Accepted publickey for core from 10.0.0.1 port 34464 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:32.761357 sshd[4701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:32.767166 containerd[1478]: time="2025-09-10T00:41:32.766745632Z" level=info msg="CreateContainer within sandbox \"522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ccba3767fb2f43f17b214a33990403b25d43afe5172dbdc94ed491aa99cea8e9\"" Sep 10 00:41:32.770110 systemd-logind[1453]: New session 11 of user core. Sep 10 00:41:32.776240 containerd[1478]: time="2025-09-10T00:41:32.775291989Z" level=info msg="StartContainer for \"ccba3767fb2f43f17b214a33990403b25d43afe5172dbdc94ed491aa99cea8e9\"" Sep 10 00:41:32.782422 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:41:32.823711 systemd[1]: Started cri-containerd-54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f.scope - libcontainer container 54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f. Sep 10 00:41:32.830413 systemd[1]: Started cri-containerd-ccba3767fb2f43f17b214a33990403b25d43afe5172dbdc94ed491aa99cea8e9.scope - libcontainer container ccba3767fb2f43f17b214a33990403b25d43afe5172dbdc94ed491aa99cea8e9. Sep 10 00:41:32.852455 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:32.900417 systemd-networkd[1405]: cali11f2d5435ee: Gained IPv6LL Sep 10 00:41:32.910105 containerd[1478]: time="2025-09-10T00:41:32.910053963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-5swrp,Uid:cc162702-bc71-43f1-9f9b-1556715e5f12,Namespace:calico-system,Attempt:1,} returns sandbox id \"54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f\"" Sep 10 00:41:32.955292 containerd[1478]: time="2025-09-10T00:41:32.953024455Z" level=info msg="StartContainer for \"ccba3767fb2f43f17b214a33990403b25d43afe5172dbdc94ed491aa99cea8e9\" returns successfully" Sep 10 00:41:33.032816 sshd[4701]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:33.056787 systemd[1]: sshd@10-10.0.0.90:22-10.0.0.1:34464.service: Deactivated successfully. Sep 10 00:41:33.059846 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:41:33.061772 systemd-logind[1453]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:41:33.063123 systemd-logind[1453]: Removed session 11. 
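The containerd records throughout this section are logrus text-format events (`time="…" level=info msg="…"`). When grep isn't enough, the key=value pairs can be pulled apart mechanically; a rough parser sketch that handles quoted and bare values only, nothing fancier:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// kvRe matches key="quoted value" or key=barevalue pairs as emitted by
// logrus's text formatter (a simplification; it can misfire on key=value
// text embedded inside a quoted msg).
var kvRe = regexp.MustCompile(`(\w+)=("(?:[^"\\]|\\.)*"|\S+)`)

func parseLogfmt(line string) map[string]string {
	out := map[string]string{}
	for _, m := range kvRe.FindAllStringSubmatch(line, -1) {
		key, val := m[1], m[2]
		if len(val) > 0 && val[0] == '"' {
			if unq, err := strconv.Unquote(val); err == nil {
				val = unq
			}
		}
		out[key] = val
	}
	return out
}

func main() {
	line := `time="2025-09-10T00:41:31.559930105Z" level=info msg="RunPodSandbox returns sandbox id \"421cd6f6...\""`
	rec := parseLogfmt(line)
	fmt.Println(rec["level"], "|", rec["msg"])
}
```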
Sep 10 00:41:33.373372 kubelet[2569]: I0910 00:41:33.372992 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-66f64968dc-xxlgr" podStartSLOduration=28.15347563 podStartE2EDuration="31.37296767s" podCreationTimestamp="2025-09-10 00:41:02 +0000 UTC" firstStartedPulling="2025-09-10 00:41:29.492864511 +0000 UTC m=+46.758298473" lastFinishedPulling="2025-09-10 00:41:32.712356541 +0000 UTC m=+49.977790513" observedRunningTime="2025-09-10 00:41:33.372667391 +0000 UTC m=+50.638101373" watchObservedRunningTime="2025-09-10 00:41:33.37296767 +0000 UTC m=+50.638401643" Sep 10 00:41:34.627532 systemd-networkd[1405]: cali4175ae473ed: Gained IPv6LL Sep 10 00:41:36.713957 containerd[1478]: time="2025-09-10T00:41:36.713858644Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:36.715800 containerd[1478]: time="2025-09-10T00:41:36.715749042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 10 00:41:36.717294 containerd[1478]: time="2025-09-10T00:41:36.717253450Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:36.723513 containerd[1478]: time="2025-09-10T00:41:36.723421983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:36.724500 containerd[1478]: time="2025-09-10T00:41:36.724414139Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.01198178s" Sep 10 00:41:36.724500 containerd[1478]: time="2025-09-10T00:41:36.724488295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:41:36.725923 containerd[1478]: time="2025-09-10T00:41:36.725881900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 00:41:36.738646 containerd[1478]: time="2025-09-10T00:41:36.738559831Z" level=info msg="CreateContainer within sandbox \"41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:41:36.764439 containerd[1478]: time="2025-09-10T00:41:36.764249759Z" level=info msg="CreateContainer within sandbox \"41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d88cf26550de1c5f7faadc81a97b73e1568e994f969e878743762ea68e868b8e\"" Sep 10 00:41:36.765726 containerd[1478]: time="2025-09-10T00:41:36.765669592Z" level=info msg="StartContainer for \"d88cf26550de1c5f7faadc81a97b73e1568e994f969e878743762ea68e868b8e\"" Sep 10 00:41:36.810557 systemd[1]: Started cri-containerd-d88cf26550de1c5f7faadc81a97b73e1568e994f969e878743762ea68e868b8e.scope - libcontainer container d88cf26550de1c5f7faadc81a97b73e1568e994f969e878743762ea68e868b8e. 
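The pull records above carry both a byte count (`bytes read=47333864`, size `48826583`) and a wall-clock duration (`in 4.01198178s`), which is enough to estimate effective registry throughput. A quick sketch using the apiserver pull figures (values copied from the log; the arithmetic is ours):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures from the ghcr.io/flatcar/calico/apiserver:v3.30.3 pull above.
	const sizeBytes = 48826583 // image size reported by containerd
	d, err := time.ParseDuration("4.01198178s")
	if err != nil {
		panic(err)
	}
	mbps := float64(sizeBytes) / d.Seconds() / 1e6
	fmt.Printf("~%.1f MB/s effective\n", mbps) // ~12.2 MB/s
}
```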
Sep 10 00:41:36.861972 containerd[1478]: time="2025-09-10T00:41:36.861903873Z" level=info msg="StartContainer for \"d88cf26550de1c5f7faadc81a97b73e1568e994f969e878743762ea68e868b8e\" returns successfully" Sep 10 00:41:37.254230 kubelet[2569]: I0910 00:41:37.252356 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-524kf" podStartSLOduration=31.32664679 podStartE2EDuration="38.252330682s" podCreationTimestamp="2025-09-10 00:40:59 +0000 UTC" firstStartedPulling="2025-09-10 00:41:29.799988953 +0000 UTC m=+47.065422915" lastFinishedPulling="2025-09-10 00:41:36.725672835 +0000 UTC m=+53.991106807" observedRunningTime="2025-09-10 00:41:37.252132006 +0000 UTC m=+54.517565978" watchObservedRunningTime="2025-09-10 00:41:37.252330682 +0000 UTC m=+54.517764644" Sep 10 00:41:38.054630 systemd[1]: Started sshd@11-10.0.0.90:22-10.0.0.1:34476.service - OpenSSH per-connection server daemon (10.0.0.1:34476). Sep 10 00:41:38.483354 sshd[4916]: Accepted publickey for core from 10.0.0.1 port 34476 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:38.485461 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:38.490615 systemd-logind[1453]: New session 12 of user core. Sep 10 00:41:38.496474 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 00:41:38.648725 sshd[4916]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:38.652798 systemd[1]: sshd@11-10.0.0.90:22-10.0.0.1:34476.service: Deactivated successfully. Sep 10 00:41:38.655359 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:41:38.657824 systemd-logind[1453]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:41:38.659035 systemd-logind[1453]: Removed session 12. 
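The pod_startup_latency_tracker lines fit a simple relation: podStartSLOduration is the end-to-end duration (watchObservedRunningTime minus podCreationTimestamp) minus the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving the calico-apiserver pod's figures from the logged timestamps, as a consistency check rather than kubelet's code:

```go
package main

import (
	"fmt"
	"time"
)

// layout matches Go's default time.Time formatting used in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-09-10 00:40:59 +0000 UTC")
	firstPull := mustParse("2025-09-10 00:41:29.799988953 +0000 UTC")
	lastPull := mustParse("2025-09-10 00:41:36.725672835 +0000 UTC")
	running := mustParse("2025-09-10 00:41:37.252330682 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println(e2e) // 38.252330682s, the logged podStartE2EDuration
	fmt.Println(slo) // 31.3266468s, the logged 31.32664679s up to float rounding
}
```

The same relation reproduces the kube-controllers pod's 28.15s SLO figure from its 31.37s E2E and 3.22s pull window.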
Sep 10 00:41:38.823338 containerd[1478]: time="2025-09-10T00:41:38.823280776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:38.824179 containerd[1478]: time="2025-09-10T00:41:38.824115819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 10 00:41:38.825595 containerd[1478]: time="2025-09-10T00:41:38.825552201Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:38.828599 containerd[1478]: time="2025-09-10T00:41:38.828548386Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:38.829159 containerd[1478]: time="2025-09-10T00:41:38.829135882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 2.103217214s" Sep 10 00:41:38.829217 containerd[1478]: time="2025-09-10T00:41:38.829166609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 10 00:41:38.830243 containerd[1478]: time="2025-09-10T00:41:38.830206770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 00:41:38.834633 containerd[1478]: time="2025-09-10T00:41:38.834596469Z" level=info msg="CreateContainer within sandbox \"ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 00:41:38.849339 containerd[1478]: time="2025-09-10T00:41:38.849296894Z" level=info msg="CreateContainer within sandbox \"ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6ef7cbcca291a57975e715a37ec27a824d7a73d1a46fd938aa1782a685042c73\"" Sep 10 00:41:38.851165 containerd[1478]: time="2025-09-10T00:41:38.850079119Z" level=info msg="StartContainer for \"6ef7cbcca291a57975e715a37ec27a824d7a73d1a46fd938aa1782a685042c73\"" Sep 10 00:41:38.877546 containerd[1478]: time="2025-09-10T00:41:38.877509641Z" level=info msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" Sep 10 00:41:38.884360 systemd[1]: Started cri-containerd-6ef7cbcca291a57975e715a37ec27a824d7a73d1a46fd938aa1782a685042c73.scope - libcontainer container 6ef7cbcca291a57975e715a37ec27a824d7a73d1a46fd938aa1782a685042c73. Sep 10 00:41:38.942180 containerd[1478]: time="2025-09-10T00:41:38.941861306Z" level=info msg="StartContainer for \"6ef7cbcca291a57975e715a37ec27a824d7a73d1a46fd938aa1782a685042c73\" returns successfully" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.940 [INFO][4966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.941 [INFO][4966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" iface="eth0" netns="/var/run/netns/cni-09641e86-b44b-4cfa-7d79-37bdad98ec2b" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.941 [INFO][4966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" iface="eth0" netns="/var/run/netns/cni-09641e86-b44b-4cfa-7d79-37bdad98ec2b" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.941 [INFO][4966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" iface="eth0" netns="/var/run/netns/cni-09641e86-b44b-4cfa-7d79-37bdad98ec2b" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.941 [INFO][4966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.941 [INFO][4966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.971 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.971 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.971 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.977 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.977 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.978 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:38.986914 containerd[1478]: 2025-09-10 00:41:38.982 [INFO][4966] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:38.988149 containerd[1478]: time="2025-09-10T00:41:38.987130530Z" level=info msg="TearDown network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" successfully" Sep 10 00:41:38.988149 containerd[1478]: time="2025-09-10T00:41:38.987159283Z" level=info msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" returns successfully" Sep 10 00:41:38.988289 kubelet[2569]: E0910 00:41:38.987690 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:38.989455 containerd[1478]: time="2025-09-10T00:41:38.989419798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b547m,Uid:5e8668c2-c5ca-4727-aa07-f9c264cfce9b,Namespace:kube-system,Attempt:1,}" Sep 10 00:41:38.990599 systemd[1]: run-netns-cni\x2d09641e86\x2db44b\x2d4cfa\x2d7d79\x2d37bdad98ec2b.mount: Deactivated successfully. Sep 10 00:41:39.142539 systemd-networkd[1405]: cali655d4ae250a: Link UP Sep 10 00:41:39.145432 systemd-networkd[1405]: cali655d4ae250a: Gained carrier Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.042 [INFO][5004] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--b547m-eth0 coredns-674b8bbfcf- kube-system 5e8668c2-c5ca-4727-aa07-f9c264cfce9b 1097 0 2025-09-10 00:40:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-b547m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali655d4ae250a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.042 [INFO][5004] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.072 [INFO][5018] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" HandleID="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.072 [INFO][5018] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" HandleID="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f860), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-b547m", "timestamp":"2025-09-10 00:41:39.072503947 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.074 [INFO][5018] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.074 [INFO][5018] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.074 [INFO][5018] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.081 [INFO][5018] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.085 [INFO][5018] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.089 [INFO][5018] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.091 [INFO][5018] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.093 [INFO][5018] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.093 [INFO][5018] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.095 [INFO][5018] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386 Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.098 [INFO][5018] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.111 [INFO][5018] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.112 [INFO][5018] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" host="localhost" Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.112 [INFO][5018] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 00:41:39.168846 containerd[1478]: 2025-09-10 00:41:39.112 [INFO][5018] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" HandleID="k8s-pod-network.da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.169962 containerd[1478]: 2025-09-10 00:41:39.135 [INFO][5004] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b547m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5e8668c2-c5ca-4727-aa07-f9c264cfce9b", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-b547m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali655d4ae250a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:39.169962 containerd[1478]: 2025-09-10 00:41:39.136 [INFO][5004] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.169962 containerd[1478]: 2025-09-10 00:41:39.136 [INFO][5004] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali655d4ae250a ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.169962 containerd[1478]: 2025-09-10 00:41:39.147 [INFO][5004] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.169962 
containerd[1478]: 2025-09-10 00:41:39.152 [INFO][5004] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b547m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5e8668c2-c5ca-4727-aa07-f9c264cfce9b", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386", Pod:"coredns-674b8bbfcf-b547m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali655d4ae250a", MAC:"3e:24:ab:40:11:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:39.169962 containerd[1478]: 2025-09-10 00:41:39.164 [INFO][5004] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386" Namespace="kube-system" Pod="coredns-674b8bbfcf-b547m" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:39.188663 containerd[1478]: time="2025-09-10T00:41:39.188336253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:39.188663 containerd[1478]: time="2025-09-10T00:41:39.188399370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:39.188663 containerd[1478]: time="2025-09-10T00:41:39.188411131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:39.188663 containerd[1478]: time="2025-09-10T00:41:39.188543076Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:39.206551 systemd[1]: Started cri-containerd-da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386.scope - libcontainer container da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386. Sep 10 00:41:39.221768 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:39.257607 containerd[1478]: time="2025-09-10T00:41:39.257561530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-b547m,Uid:5e8668c2-c5ca-4727-aa07-f9c264cfce9b,Namespace:kube-system,Attempt:1,} returns sandbox id \"da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386\"" Sep 10 00:41:39.258535 kubelet[2569]: E0910 00:41:39.258499 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:39.495992 containerd[1478]: time="2025-09-10T00:41:39.494819824Z" level=info msg="CreateContainer within sandbox \"da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:41:39.531096 containerd[1478]: time="2025-09-10T00:41:39.531011170Z" level=info msg="CreateContainer within sandbox \"da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"68c4b546f8c18f10f526f4b6a08a074b1f4a53d0db4e5caeb675a796a09198ec\"" Sep 10 00:41:39.531817 containerd[1478]: time="2025-09-10T00:41:39.531769453Z" level=info msg="StartContainer for \"68c4b546f8c18f10f526f4b6a08a074b1f4a53d0db4e5caeb675a796a09198ec\"" Sep 10 00:41:39.540996 containerd[1478]: time="2025-09-10T00:41:39.540917492Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:39.542083 containerd[1478]: time="2025-09-10T00:41:39.542040019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 00:41:39.544639 containerd[1478]: time="2025-09-10T00:41:39.544588025Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 714.34064ms" Sep 10 00:41:39.544639 containerd[1478]: time="2025-09-10T00:41:39.544631355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 10 00:41:39.546673 containerd[1478]: time="2025-09-10T00:41:39.546632889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 00:41:39.563432 systemd[1]: Started cri-containerd-68c4b546f8c18f10f526f4b6a08a074b1f4a53d0db4e5caeb675a796a09198ec.scope - libcontainer container 68c4b546f8c18f10f526f4b6a08a074b1f4a53d0db4e5caeb675a796a09198ec. 
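With coredns-674b8bbfcf-b547m now up on 192.168.88.135, the endpoint's dns/dns-tcp ports (53) can be exercised directly from the node. A sketch using Go's resolver with the pod address hard-wired; the query name assumes the conventional cluster.local service domain:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true, // use the pure-Go resolver so Dial below is honored
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// Send every query to the coredns pod IP assigned above.
			return d.DialContext(ctx, network, "192.168.88.135:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(addrs)
}
```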
Sep 10 00:41:39.635772 containerd[1478]: time="2025-09-10T00:41:39.635694184Z" level=info msg="CreateContainer within sandbox \"421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 00:41:39.793217 containerd[1478]: time="2025-09-10T00:41:39.793152452Z" level=info msg="StartContainer for \"68c4b546f8c18f10f526f4b6a08a074b1f4a53d0db4e5caeb675a796a09198ec\" returns successfully" Sep 10 00:41:39.892764 containerd[1478]: time="2025-09-10T00:41:39.892629322Z" level=info msg="CreateContainer within sandbox \"421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5a923098b176bbc32ea183590197de009e8d887fbcf5631b086debcbeb7b77cc\"" Sep 10 00:41:39.893520 containerd[1478]: time="2025-09-10T00:41:39.893475077Z" level=info msg="StartContainer for \"5a923098b176bbc32ea183590197de009e8d887fbcf5631b086debcbeb7b77cc\"" Sep 10 00:41:39.940477 systemd[1]: Started cri-containerd-5a923098b176bbc32ea183590197de009e8d887fbcf5631b086debcbeb7b77cc.scope - libcontainer container 5a923098b176bbc32ea183590197de009e8d887fbcf5631b086debcbeb7b77cc. Sep 10 00:41:40.478066 containerd[1478]: time="2025-09-10T00:41:40.478008833Z" level=info msg="StartContainer for \"5a923098b176bbc32ea183590197de009e8d887fbcf5631b086debcbeb7b77cc\" returns successfully" Sep 10 00:41:40.481290 kubelet[2569]: E0910 00:41:40.481253 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:40.652495 kubelet[2569]: I0910 00:41:40.652410 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-b547m" podStartSLOduration=52.652381983 podStartE2EDuration="52.652381983s" podCreationTimestamp="2025-09-10 00:40:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:41:40.651110726 +0000 UTC m=+57.916544688" watchObservedRunningTime="2025-09-10 00:41:40.652381983 +0000 UTC m=+57.917815945" Sep 10 00:41:40.690087 kubelet[2569]: I0910 00:41:40.689973 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66b8fdf8b8-gh75h" podStartSLOduration=33.706733482 podStartE2EDuration="41.689950516s" podCreationTimestamp="2025-09-10 00:40:59 +0000 UTC" firstStartedPulling="2025-09-10 00:41:31.562326789 +0000 UTC m=+48.827760751" lastFinishedPulling="2025-09-10 00:41:39.545543823 +0000 UTC m=+56.810977785" observedRunningTime="2025-09-10 00:41:40.671334808 +0000 UTC m=+57.936768770" watchObservedRunningTime="2025-09-10 00:41:40.689950516 +0000 UTC m=+57.955384479" Sep 10 00:41:40.876356 containerd[1478]: time="2025-09-10T00:41:40.876288207Z" level=info msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" Sep 10 00:41:41.091467 systemd-networkd[1405]: cali655d4ae250a: Gained IPv6LL Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.936 [INFO][5178] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.936 [INFO][5178] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" iface="eth0" netns="/var/run/netns/cni-f1febd7a-e437-8884-76f4-43b9d6b0927b" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.937 [INFO][5178] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" iface="eth0" netns="/var/run/netns/cni-f1febd7a-e437-8884-76f4-43b9d6b0927b" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.937 [INFO][5178] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" iface="eth0" netns="/var/run/netns/cni-f1febd7a-e437-8884-76f4-43b9d6b0927b" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.938 [INFO][5178] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.938 [INFO][5178] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.962 [INFO][5187] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.962 [INFO][5187] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:40.962 [INFO][5187] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:41.083 [WARNING][5187] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:41.084 [INFO][5187] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:41.088 [INFO][5187] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:41.094621 containerd[1478]: 2025-09-10 00:41:41.091 [INFO][5178] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:41.096545 containerd[1478]: time="2025-09-10T00:41:41.094873569Z" level=info msg="TearDown network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" successfully" Sep 10 00:41:41.096545 containerd[1478]: time="2025-09-10T00:41:41.094904196Z" level=info msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" returns successfully" Sep 10 00:41:41.096545 containerd[1478]: time="2025-09-10T00:41:41.096107991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q4hq,Uid:a49cae08-4a20-4c05-9f35-ae3ac5421522,Namespace:calico-system,Attempt:1,}" Sep 10 00:41:41.098006 systemd[1]: run-netns-cni\x2df1febd7a\x2de437\x2d8884\x2d76f4\x2d43b9d6b0927b.mount: Deactivated successfully. Sep 10 00:41:41.492413 kubelet[2569]: E0910 00:41:41.492373 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:42.451362 systemd-networkd[1405]: cali3b7b0039b4d: Link UP Sep 10 00:41:42.451664 systemd-networkd[1405]: cali3b7b0039b4d: Gained carrier Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.333 [INFO][5195] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6q4hq-eth0 csi-node-driver- calico-system a49cae08-4a20-4c05-9f35-ae3ac5421522 1131 0 2025-09-10 00:41:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6q4hq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b7b0039b4d [] [] }} ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.333 [INFO][5195] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.370 [INFO][5210] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" HandleID="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.370 [INFO][5210] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" HandleID="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a2510), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6q4hq", "timestamp":"2025-09-10 00:41:42.370462747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.370 [INFO][5210] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.370 [INFO][5210] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.370 [INFO][5210] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.382 [INFO][5210] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.390 [INFO][5210] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.396 [INFO][5210] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.401 [INFO][5210] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.404 [INFO][5210] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.404 [INFO][5210] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.407 [INFO][5210] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25 Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.422 [INFO][5210] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.438 [INFO][5210] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.439 [INFO][5210] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" host="localhost" Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.439 [INFO][5210] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
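The pod_startup_latency_tracker entries a little earlier encode a simple relationship: podStartSLOduration appears to be the end-to-end startup time minus the image-pull window, i.e. the SLO metric excludes time spent pulling. A quick Go check, under that assumption, reproduces the calico-apiserver numbers from the log exactly:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the pod_startup_latency_tracker entry above.
	firstStartedPulling := parse("2025-09-10T00:41:31.562326789Z")
	lastFinishedPulling := parse("2025-09-10T00:41:39.545543823Z")
	podStartE2E := 41689950516 * time.Nanosecond // podStartE2EDuration="41.689950516s"

	pull := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(pull)               // 7.983217034s spent pulling the image
	fmt.Println(podStartE2E - pull) // 33.706733482s == podStartSLOduration in the log
}
```

For the coredns pod the pull timestamps are the zero value ("0001-01-01 ..."), so nothing is subtracted and its SLO duration equals the E2E duration (52.652381983s for both), consistent with the same rule.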
Sep 10 00:41:42.473221 containerd[1478]: 2025-09-10 00:41:42.439 [INFO][5210] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" HandleID="k8s-pod-network.79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.474502 containerd[1478]: 2025-09-10 00:41:42.444 [INFO][5195] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6q4hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a49cae08-4a20-4c05-9f35-ae3ac5421522", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6q4hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b7b0039b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:42.474502 containerd[1478]: 2025-09-10 00:41:42.444 [INFO][5195] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.474502 containerd[1478]: 2025-09-10 00:41:42.444 [INFO][5195] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b7b0039b4d ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.474502 containerd[1478]: 2025-09-10 00:41:42.449 [INFO][5195] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.474502 containerd[1478]: 2025-09-10 00:41:42.452 [INFO][5195] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6q4hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a49cae08-4a20-4c05-9f35-ae3ac5421522", ResourceVersion:"1131", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25", Pod:"csi-node-driver-6q4hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b7b0039b4d", MAC:"fa:49:5c:88:e4:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:42.474502 containerd[1478]: 2025-09-10 00:41:42.466 [INFO][5195] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25" Namespace="calico-system" Pod="csi-node-driver-6q4hq" WorkloadEndpoint="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:42.498735 kubelet[2569]: I0910 00:41:42.498184 2569 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 00:41:42.501055 kubelet[2569]: E0910 00:41:42.500826 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:42.510358 containerd[1478]: time="2025-09-10T00:41:42.510204253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 10 00:41:42.510358 containerd[1478]: time="2025-09-10T00:41:42.510278652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 10 00:41:42.510637 containerd[1478]: time="2025-09-10T00:41:42.510317554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:42.510637 containerd[1478]: time="2025-09-10T00:41:42.510428389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 10 00:41:42.549517 systemd[1]: Started cri-containerd-79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25.scope - libcontainer container 79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25. 
Sep 10 00:41:42.575220 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:41:42.592999 containerd[1478]: time="2025-09-10T00:41:42.592017177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6q4hq,Uid:a49cae08-4a20-4c05-9f35-ae3ac5421522,Namespace:calico-system,Attempt:1,} returns sandbox id \"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25\"" Sep 10 00:41:42.865213 containerd[1478]: time="2025-09-10T00:41:42.862417477Z" level=info msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.080 [WARNING][5289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6q4hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a49cae08-4a20-4c05-9f35-ae3ac5421522", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25", Pod:"csi-node-driver-6q4hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b7b0039b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.081 [INFO][5289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.081 [INFO][5289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" iface="eth0" netns="" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.081 [INFO][5289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.081 [INFO][5289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.115 [INFO][5297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.115 [INFO][5297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.116 [INFO][5297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.235 [WARNING][5297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.235 [INFO][5297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.237 [INFO][5297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:43.245218 containerd[1478]: 2025-09-10 00:41:43.241 [INFO][5289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.245218 containerd[1478]: time="2025-09-10T00:41:43.245149734Z" level=info msg="TearDown network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" successfully" Sep 10 00:41:43.245218 containerd[1478]: time="2025-09-10T00:41:43.245207762Z" level=info msg="StopPodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" returns successfully" Sep 10 00:41:43.246037 containerd[1478]: time="2025-09-10T00:41:43.246003614Z" level=info msg="RemovePodSandbox for \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" Sep 10 00:41:43.248917 containerd[1478]: time="2025-09-10T00:41:43.248877438Z" level=info msg="Forcibly stopping sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\"" Sep 10 00:41:43.463761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount927528219.mount: Deactivated successfully. Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.288 [WARNING][5315] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6q4hq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a49cae08-4a20-4c05-9f35-ae3ac5421522", ResourceVersion:"1142", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25", Pod:"csi-node-driver-6q4hq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b7b0039b4d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.288 [INFO][5315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.288 [INFO][5315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" iface="eth0" netns="" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.288 [INFO][5315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.288 [INFO][5315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.594 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.595 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.595 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.603 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.603 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" HandleID="k8s-pod-network.b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Workload="localhost-k8s-csi--node--driver--6q4hq-eth0" Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.604 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:43.610791 containerd[1478]: 2025-09-10 00:41:43.607 [INFO][5315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f" Sep 10 00:41:43.611556 containerd[1478]: time="2025-09-10T00:41:43.610832316Z" level=info msg="TearDown network for sandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" successfully" Sep 10 00:41:43.645137 containerd[1478]: time="2025-09-10T00:41:43.645055970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:43.645335 containerd[1478]: time="2025-09-10T00:41:43.645152438Z" level=info msg="RemovePodSandbox \"b8308230183e532e3422791b3f344ca8a2e4a1a59e7dabf85c40d0f05ea7c67f\" returns successfully" Sep 10 00:41:43.645981 containerd[1478]: time="2025-09-10T00:41:43.645954172Z" level=info msg="StopPodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\"" Sep 10 00:41:43.669873 systemd[1]: Started sshd@12-10.0.0.90:22-10.0.0.1:52362.service - OpenSSH per-connection server daemon (10.0.0.1:52362). Sep 10 00:41:43.727066 sshd[5352]: Accepted publickey for core from 10.0.0.1 port 52362 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:43.729453 sshd[5352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:43.735490 systemd-logind[1453]: New session 13 of user core. Sep 10 00:41:43.742352 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:41:44.036368 systemd-networkd[1405]: cali3b7b0039b4d: Gained IPv6LL Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.049 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0742522a-e5f6-4d86-9672-4927d9011444", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1", Pod:"calico-apiserver-66b8fdf8b8-gh75h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11f2d5435ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.049 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.049 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" iface="eth0" netns="" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.049 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.049 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.074 [INFO][5366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.074 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.075 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.109 [WARNING][5366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.109 [INFO][5366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.113 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:44.120062 containerd[1478]: 2025-09-10 00:41:44.116 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.120724 containerd[1478]: time="2025-09-10T00:41:44.120128271Z" level=info msg="TearDown network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" successfully" Sep 10 00:41:44.120724 containerd[1478]: time="2025-09-10T00:41:44.120155102Z" level=info msg="StopPodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" returns successfully" Sep 10 00:41:44.120968 containerd[1478]: time="2025-09-10T00:41:44.120947980Z" level=info msg="RemovePodSandbox for \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\"" Sep 10 00:41:44.121025 containerd[1478]: time="2025-09-10T00:41:44.120977436Z" level=info msg="Forcibly stopping sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\"" Sep 10 00:41:44.249088 sshd[5352]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:44.258535 systemd[1]: sshd@12-10.0.0.90:22-10.0.0.1:52362.service: Deactivated successfully. Sep 10 00:41:44.260843 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:41:44.264755 systemd-logind[1453]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:41:44.268526 systemd[1]: Started sshd@13-10.0.0.90:22-10.0.0.1:52368.service - OpenSSH per-connection server daemon (10.0.0.1:52368). Sep 10 00:41:44.270632 systemd-logind[1453]: Removed session 13. Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.243 [WARNING][5385] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"0742522a-e5f6-4d86-9672-4927d9011444", ResourceVersion:"1147", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"421cd6f6aed94786b932f08c9323bf9d77c1147be83f47042c99f19c206e60c1", Pod:"calico-apiserver-66b8fdf8b8-gh75h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali11f2d5435ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.243 [INFO][5385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.243 [INFO][5385] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" iface="eth0" netns="" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.244 [INFO][5385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.244 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.267 [INFO][5394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.267 [INFO][5394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.267 [INFO][5394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.274 [WARNING][5394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.274 [INFO][5394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" HandleID="k8s-pod-network.32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--gh75h-eth0" Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.277 [INFO][5394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:44.284208 containerd[1478]: 2025-09-10 00:41:44.280 [INFO][5385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd" Sep 10 00:41:44.284208 containerd[1478]: time="2025-09-10T00:41:44.283905211Z" level=info msg="TearDown network for sandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" successfully" Sep 10 00:41:44.304542 sshd[5403]: Accepted publickey for core from 10.0.0.1 port 52368 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:44.306545 sshd[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:44.310586 systemd-logind[1453]: New session 14 of user core. Sep 10 00:41:44.321342 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:41:44.423420 containerd[1478]: time="2025-09-10T00:41:44.423342894Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 10 00:41:44.423609 containerd[1478]: time="2025-09-10T00:41:44.423440456Z" level=info msg="RemovePodSandbox \"32f43c781cfd055b6a670a15cd9860e32b75d9790ab7e345465f9699c7d803bd\" returns successfully" Sep 10 00:41:44.424091 containerd[1478]: time="2025-09-10T00:41:44.424057077Z" level=info msg="StopPodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\"" Sep 10 00:41:44.453404 containerd[1478]: time="2025-09-10T00:41:44.453328256Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:44.460136 containerd[1478]: time="2025-09-10T00:41:44.459849081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 10 00:41:44.464144 containerd[1478]: time="2025-09-10T00:41:44.464090825Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:44.468979 containerd[1478]: time="2025-09-10T00:41:44.468885010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:44.469751 containerd[1478]: time="2025-09-10T00:41:44.469715569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.922940808s" Sep 10 00:41:44.469843 containerd[1478]: time="2025-09-10T00:41:44.469753309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 10 00:41:44.475094 containerd[1478]: time="2025-09-10T00:41:44.475029804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 00:41:44.485887 containerd[1478]: time="2025-09-10T00:41:44.485819754Z" level=info msg="CreateContainer within sandbox \"54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 00:41:44.513437 containerd[1478]: time="2025-09-10T00:41:44.513310583Z" level=info msg="CreateContainer within sandbox \"54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4\"" Sep 10 00:41:44.513885 containerd[1478]: time="2025-09-10T00:41:44.513863403Z" level=info msg="StartContainer for \"a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4\"" Sep 10 00:41:44.555028 sshd[5403]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.478 [WARNING][5424] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" WorkloadEndpoint="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.479 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.479 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" iface="eth0" netns="" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.479 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.479 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.521 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.522 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.522 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.548 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.548 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.556 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:44.571809 containerd[1478]: 2025-09-10 00:41:44.562 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.572569 containerd[1478]: time="2025-09-10T00:41:44.572309975Z" level=info msg="TearDown network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" successfully" Sep 10 00:41:44.572569 containerd[1478]: time="2025-09-10T00:41:44.572344518Z" level=info msg="StopPodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" returns successfully" Sep 10 00:41:44.576873 systemd[1]: sshd@13-10.0.0.90:22-10.0.0.1:52368.service: Deactivated successfully. Sep 10 00:41:44.579144 containerd[1478]: time="2025-09-10T00:41:44.577313840Z" level=info msg="RemovePodSandbox for \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\"" Sep 10 00:41:44.580030 containerd[1478]: time="2025-09-10T00:41:44.579786663Z" level=info msg="Forcibly stopping sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\"" Sep 10 00:41:44.580559 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:41:44.582282 systemd-logind[1453]: Session 14 logged out. 
Waiting for processes to exit. Sep 10 00:41:44.596707 systemd[1]: Started sshd@14-10.0.0.90:22-10.0.0.1:52370.service - OpenSSH per-connection server daemon (10.0.0.1:52370). Sep 10 00:41:44.608542 systemd-logind[1453]: Removed session 14. Sep 10 00:41:44.612764 systemd[1]: Started cri-containerd-a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4.scope - libcontainer container a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4. Sep 10 00:41:44.642402 sshd[5471]: Accepted publickey for core from 10.0.0.1 port 52370 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:44.644869 sshd[5471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:44.653009 systemd-logind[1453]: New session 15 of user core. Sep 10 00:41:44.657483 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 00:41:44.679023 containerd[1478]: time="2025-09-10T00:41:44.678966539Z" level=info msg="StartContainer for \"a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4\" returns successfully" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.641 [WARNING][5477] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" WorkloadEndpoint="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.641 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.641 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" iface="eth0" netns="" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.641 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.641 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.688 [INFO][5494] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.689 [INFO][5494] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.689 [INFO][5494] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.695 [WARNING][5494] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.695 [INFO][5494] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" HandleID="k8s-pod-network.7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Workload="localhost-k8s-whisker--79bff68756--4dck5-eth0" Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.697 [INFO][5494] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:44.704464 containerd[1478]: 2025-09-10 00:41:44.701 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba" Sep 10 00:41:44.704995 containerd[1478]: time="2025-09-10T00:41:44.704511205Z" level=info msg="TearDown network for sandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" successfully" Sep 10 00:41:44.721630 containerd[1478]: time="2025-09-10T00:41:44.720930919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:44.721630 containerd[1478]: time="2025-09-10T00:41:44.721046755Z" level=info msg="RemovePodSandbox \"7d9bd09389a73633ca74595b85f84698ad8e814f0ab2986edebf557a14f245ba\" returns successfully" Sep 10 00:41:44.722445 containerd[1478]: time="2025-09-10T00:41:44.722346921Z" level=info msg="StopPodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\"" Sep 10 00:41:44.829410 sshd[5471]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.783 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3961797f-cb69-46a6-8831-a00deb4ca0a0", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980", Pod:"calico-apiserver-66b8fdf8b8-524kf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cae26fb9f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.783 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.783 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" iface="eth0" netns="" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.783 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.783 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.813 [INFO][5545] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.815 [INFO][5545] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.815 [INFO][5545] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.822 [WARNING][5545] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.822 [INFO][5545] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.824 [INFO][5545] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:44.835908 containerd[1478]: 2025-09-10 00:41:44.828 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.836664 containerd[1478]: time="2025-09-10T00:41:44.836004766Z" level=info msg="TearDown network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" successfully" Sep 10 00:41:44.836664 containerd[1478]: time="2025-09-10T00:41:44.836038429Z" level=info msg="StopPodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" returns successfully" Sep 10 00:41:44.837037 containerd[1478]: time="2025-09-10T00:41:44.836991237Z" level=info msg="RemovePodSandbox for \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\"" Sep 10 00:41:44.838431 containerd[1478]: time="2025-09-10T00:41:44.837039477Z" level=info msg="Forcibly stopping sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\"" Sep 10 00:41:44.837289 systemd-logind[1453]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:41:44.838405 systemd[1]: sshd@14-10.0.0.90:22-10.0.0.1:52370.service: Deactivated successfully. Sep 10 00:41:44.841695 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:41:44.842934 systemd-logind[1453]: Removed session 15. Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.878 [WARNING][5566] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0", GenerateName:"calico-apiserver-66b8fdf8b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"3961797f-cb69-46a6-8831-a00deb4ca0a0", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b8fdf8b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41e2b0d9c72dd8ede4d9306ebe44b8ea785c552a98560f9e4090f4e254e9e980", Pod:"calico-apiserver-66b8fdf8b8-524kf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3cae26fb9f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.878 [INFO][5566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.878 [INFO][5566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" iface="eth0" netns="" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.878 [INFO][5566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.879 [INFO][5566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.906 [INFO][5576] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.906 [INFO][5576] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.908 [INFO][5576] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.915 [WARNING][5576] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.915 [INFO][5576] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" HandleID="k8s-pod-network.781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Workload="localhost-k8s-calico--apiserver--66b8fdf8b8--524kf-eth0" Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.916 [INFO][5576] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:44.925411 containerd[1478]: 2025-09-10 00:41:44.920 [INFO][5566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40" Sep 10 00:41:44.925862 containerd[1478]: time="2025-09-10T00:41:44.925477241Z" level=info msg="TearDown network for sandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" successfully" Sep 10 00:41:44.930057 containerd[1478]: time="2025-09-10T00:41:44.929993476Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:44.930057 containerd[1478]: time="2025-09-10T00:41:44.930072374Z" level=info msg="RemovePodSandbox \"781db95bb7bd1b6787ab712c77f32566c84f971a050d4c5905a1378ca02a5f40\" returns successfully" Sep 10 00:41:44.930673 containerd[1478]: time="2025-09-10T00:41:44.930634943Z" level=info msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:44.980 [WARNING][5594] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b547m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5e8668c2-c5ca-4727-aa07-f9c264cfce9b", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386", Pod:"coredns-674b8bbfcf-b547m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali655d4ae250a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:44.980 [INFO][5594] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:44.980 [INFO][5594] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" iface="eth0" netns="" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:44.980 [INFO][5594] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:44.980 [INFO][5594] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.009 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.009 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.010 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.016 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.016 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.018 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.025088 containerd[1478]: 2025-09-10 00:41:45.021 [INFO][5594] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.025644 containerd[1478]: time="2025-09-10T00:41:45.025150312Z" level=info msg="TearDown network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" successfully" Sep 10 00:41:45.025644 containerd[1478]: time="2025-09-10T00:41:45.025176620Z" level=info msg="StopPodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" returns successfully" Sep 10 00:41:45.025887 containerd[1478]: time="2025-09-10T00:41:45.025839920Z" level=info msg="RemovePodSandbox for \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" Sep 10 00:41:45.025929 containerd[1478]: time="2025-09-10T00:41:45.025886827Z" level=info msg="Forcibly stopping sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\"" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.063 [WARNING][5620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--b547m-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5e8668c2-c5ca-4727-aa07-f9c264cfce9b", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"da0748c6b84462f9f07b627c9278cc01b930f869dd088d998408f56d30378386", Pod:"coredns-674b8bbfcf-b547m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali655d4ae250a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.064 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.064 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" iface="eth0" netns="" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.064 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.064 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.084 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.084 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.084 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.091 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.091 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" HandleID="k8s-pod-network.4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Workload="localhost-k8s-coredns--674b8bbfcf--b547m-eth0" Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.093 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.099418 containerd[1478]: 2025-09-10 00:41:45.095 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9" Sep 10 00:41:45.099418 containerd[1478]: time="2025-09-10T00:41:45.099394929Z" level=info msg="TearDown network for sandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" successfully" Sep 10 00:41:45.108639 containerd[1478]: time="2025-09-10T00:41:45.108566265Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:45.108639 containerd[1478]: time="2025-09-10T00:41:45.108649080Z" level=info msg="RemovePodSandbox \"4814dd795bad5880d5353f9e5ee4bb74a432587bd8ad15200e24198a958edde9\" returns successfully" Sep 10 00:41:45.109243 containerd[1478]: time="2025-09-10T00:41:45.109184891Z" level=info msg="StopPodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\"" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.145 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39471f10-8655-44e1-b957-a2e56d511c05", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a", Pod:"coredns-674b8bbfcf-fkc6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8d11c7396c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.146 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.146 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" iface="eth0" netns="" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.146 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.146 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.169 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.169 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.169 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.181 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.181 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.182 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.188304 containerd[1478]: 2025-09-10 00:41:45.185 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.188854 containerd[1478]: time="2025-09-10T00:41:45.188363156Z" level=info msg="TearDown network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" successfully" Sep 10 00:41:45.188854 containerd[1478]: time="2025-09-10T00:41:45.188402891Z" level=info msg="StopPodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" returns successfully" Sep 10 00:41:45.189122 containerd[1478]: time="2025-09-10T00:41:45.189008362Z" level=info msg="RemovePodSandbox for \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\"" Sep 10 00:41:45.189122 containerd[1478]: time="2025-09-10T00:41:45.189058706Z" level=info msg="Forcibly stopping sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\"" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.229 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"39471f10-8655-44e1-b957-a2e56d511c05", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 40, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43eaa6df40676d991919cad7aae43da70bdc2b9d13a5d66df796186cd7a0a25a", Pod:"coredns-674b8bbfcf-fkc6w", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie8d11c7396c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.229 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.229 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" iface="eth0" netns="" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.229 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.229 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.260 [INFO][5682] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.261 [INFO][5682] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.261 [INFO][5682] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.267 [WARNING][5682] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.268 [INFO][5682] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" HandleID="k8s-pod-network.c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Workload="localhost-k8s-coredns--674b8bbfcf--fkc6w-eth0" Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.270 [INFO][5682] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.277115 containerd[1478]: 2025-09-10 00:41:45.273 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23" Sep 10 00:41:45.278447 containerd[1478]: time="2025-09-10T00:41:45.277166836Z" level=info msg="TearDown network for sandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" successfully" Sep 10 00:41:45.281742 containerd[1478]: time="2025-09-10T00:41:45.281689857Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:45.281796 containerd[1478]: time="2025-09-10T00:41:45.281751973Z" level=info msg="RemovePodSandbox \"c4a8088cd03d1e971e0b44601e7e818654511a76e2d0b152a4818897f86b6b23\" returns successfully" Sep 10 00:41:45.282320 containerd[1478]: time="2025-09-10T00:41:45.282293835Z" level=info msg="StopPodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\"" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.322 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0", GenerateName:"calico-kube-controllers-66f64968dc-", Namespace:"calico-system", SelfLink:"", UID:"85529152-632b-471b-a89e-05d8b212c595", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66f64968dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9", Pod:"calico-kube-controllers-66f64968dc-xxlgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4606df1c89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.322 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.323 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" iface="eth0" netns="" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.323 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.323 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.350 [INFO][5708] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.350 [INFO][5708] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.351 [INFO][5708] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.357 [WARNING][5708] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.357 [INFO][5708] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.360 [INFO][5708] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.367595 containerd[1478]: 2025-09-10 00:41:45.363 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.367595 containerd[1478]: time="2025-09-10T00:41:45.367563863Z" level=info msg="TearDown network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" successfully" Sep 10 00:41:45.368293 containerd[1478]: time="2025-09-10T00:41:45.367613436Z" level=info msg="StopPodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" returns successfully" Sep 10 00:41:45.368443 containerd[1478]: time="2025-09-10T00:41:45.368380589Z" level=info msg="RemovePodSandbox for \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\"" Sep 10 00:41:45.368443 containerd[1478]: time="2025-09-10T00:41:45.368421015Z" level=info msg="Forcibly stopping sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\"" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.412 [WARNING][5726] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0", GenerateName:"calico-kube-controllers-66f64968dc-", Namespace:"calico-system", SelfLink:"", UID:"85529152-632b-471b-a89e-05d8b212c595", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"66f64968dc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"522b88145347c6119761b02223486e416a84fe8e6f2ddf3f1fb76962add579c9", Pod:"calico-kube-controllers-66f64968dc-xxlgr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia4606df1c89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.412 [INFO][5726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.412 [INFO][5726] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" iface="eth0" netns="" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.412 [INFO][5726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.412 [INFO][5726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.439 [INFO][5735] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.439 [INFO][5735] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.439 [INFO][5735] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.447 [WARNING][5735] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.448 [INFO][5735] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" HandleID="k8s-pod-network.4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Workload="localhost-k8s-calico--kube--controllers--66f64968dc--xxlgr-eth0" Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.449 [INFO][5735] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.455830 containerd[1478]: 2025-09-10 00:41:45.452 [INFO][5726] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739" Sep 10 00:41:45.456566 containerd[1478]: time="2025-09-10T00:41:45.455889529Z" level=info msg="TearDown network for sandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" successfully" Sep 10 00:41:45.460244 containerd[1478]: time="2025-09-10T00:41:45.460172882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:45.460310 containerd[1478]: time="2025-09-10T00:41:45.460282788Z" level=info msg="RemovePodSandbox \"4b51f5370a6cb04aea8582ac399c1d24e9a080c225bdf039859e2a997b55a739\" returns successfully" Sep 10 00:41:45.460942 containerd[1478]: time="2025-09-10T00:41:45.460892166Z" level=info msg="StopPodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\"" Sep 10 00:41:45.539614 kubelet[2569]: I0910 00:41:45.536333 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-5swrp" podStartSLOduration=31.976384581 podStartE2EDuration="43.536310407s" podCreationTimestamp="2025-09-10 00:41:02 +0000 UTC" firstStartedPulling="2025-09-10 00:41:32.914920796 +0000 UTC m=+50.180354758" lastFinishedPulling="2025-09-10 00:41:44.474846622 +0000 UTC m=+61.740280584" observedRunningTime="2025-09-10 00:41:45.53509184 +0000 UTC m=+62.800525802" watchObservedRunningTime="2025-09-10 00:41:45.536310407 +0000 UTC m=+62.801744379" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.498 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--5swrp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc162702-bc71-43f1-9f9b-1556715e5f12", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f", Pod:"goldmane-54d579b49d-5swrp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4175ae473ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.498 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.498 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" iface="eth0" netns="" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.498 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.498 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.525 [INFO][5760] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.525 [INFO][5760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.525 [INFO][5760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.531 [WARNING][5760] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.531 [INFO][5760] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.535 [INFO][5760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.554371 containerd[1478]: 2025-09-10 00:41:45.546 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.555498 containerd[1478]: time="2025-09-10T00:41:45.554485395Z" level=info msg="TearDown network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" successfully" Sep 10 00:41:45.555498 containerd[1478]: time="2025-09-10T00:41:45.554522835Z" level=info msg="StopPodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" returns successfully" Sep 10 00:41:45.555498 containerd[1478]: time="2025-09-10T00:41:45.555111584Z" level=info msg="RemovePodSandbox for \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\"" Sep 10 00:41:45.555498 containerd[1478]: time="2025-09-10T00:41:45.555138465Z" level=info msg="Forcibly stopping sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\"" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.609 [WARNING][5796] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--5swrp-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"cc162702-bc71-43f1-9f9b-1556715e5f12", ResourceVersion:"1182", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 0, 41, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54905d3668924109b5aaf9340a756d1d6879a7190c9eea197fa5b16413fd130f", Pod:"goldmane-54d579b49d-5swrp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4175ae473ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.609 [INFO][5796] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.609 [INFO][5796] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" iface="eth0" netns="" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.609 [INFO][5796] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.609 [INFO][5796] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.635 [INFO][5809] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.636 [INFO][5809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.636 [INFO][5809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.644 [WARNING][5809] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.644 [INFO][5809] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" HandleID="k8s-pod-network.f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Workload="localhost-k8s-goldmane--54d579b49d--5swrp-eth0" Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.646 [INFO][5809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 00:41:45.652712 containerd[1478]: 2025-09-10 00:41:45.649 [INFO][5796] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2" Sep 10 00:41:45.653146 containerd[1478]: time="2025-09-10T00:41:45.652734342Z" level=info msg="TearDown network for sandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" successfully" Sep 10 00:41:45.657758 containerd[1478]: time="2025-09-10T00:41:45.657728062Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 10 00:41:45.657825 containerd[1478]: time="2025-09-10T00:41:45.657786201Z" level=info msg="RemovePodSandbox \"f50caa5e65a7b140d9d446b8c09b581958744abeeab6c955eed84381eb6136e2\" returns successfully" Sep 10 00:41:45.895968 systemd[1]: run-containerd-runc-k8s.io-a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4-runc.19TaQw.mount: Deactivated successfully. Sep 10 00:41:46.900763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3536540749.mount: Deactivated successfully. 
Sep 10 00:41:46.951275 containerd[1478]: time="2025-09-10T00:41:46.951126962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:46.952772 containerd[1478]: time="2025-09-10T00:41:46.952686892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 10 00:41:46.954166 containerd[1478]: time="2025-09-10T00:41:46.954089787Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:46.957300 containerd[1478]: time="2025-09-10T00:41:46.957238127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:46.958225 containerd[1478]: time="2025-09-10T00:41:46.958125637Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 2.483055047s" Sep 10 00:41:46.958225 containerd[1478]: time="2025-09-10T00:41:46.958218692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 10 00:41:46.978414 containerd[1478]: time="2025-09-10T00:41:46.976840503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 00:41:46.982982 containerd[1478]: time="2025-09-10T00:41:46.982921421Z" level=info msg="CreateContainer within sandbox \"ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 00:41:46.998170 containerd[1478]: time="2025-09-10T00:41:46.998106824Z" level=info msg="CreateContainer within sandbox \"ea600cee56ec04f42bd7ce0caa3b6c4a268c604c72cebd4fd0a58fa4b163a14f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5d4e9181c273d413d98ca428f945e300f185d0f208a59899fdd0d036045428d5\"" Sep 10 00:41:47.000486 containerd[1478]: time="2025-09-10T00:41:47.000439047Z" level=info msg="StartContainer for \"5d4e9181c273d413d98ca428f945e300f185d0f208a59899fdd0d036045428d5\"" Sep 10 00:41:47.035357 systemd[1]: Started cri-containerd-5d4e9181c273d413d98ca428f945e300f185d0f208a59899fdd0d036045428d5.scope - libcontainer container 5d4e9181c273d413d98ca428f945e300f185d0f208a59899fdd0d036045428d5. 
Sep 10 00:41:47.088738 containerd[1478]: time="2025-09-10T00:41:47.088684483Z" level=info msg="StartContainer for \"5d4e9181c273d413d98ca428f945e300f185d0f208a59899fdd0d036045428d5\" returns successfully" Sep 10 00:41:47.543997 kubelet[2569]: I0910 00:41:47.543747 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f7cb9bf59-6278x" podStartSLOduration=2.460543937 podStartE2EDuration="19.543728508s" podCreationTimestamp="2025-09-10 00:41:28 +0000 UTC" firstStartedPulling="2025-09-10 00:41:29.893389454 +0000 UTC m=+47.158823416" lastFinishedPulling="2025-09-10 00:41:46.976574015 +0000 UTC m=+64.242007987" observedRunningTime="2025-09-10 00:41:47.54290758 +0000 UTC m=+64.808341542" watchObservedRunningTime="2025-09-10 00:41:47.543728508 +0000 UTC m=+64.809162470" Sep 10 00:41:48.851800 containerd[1478]: time="2025-09-10T00:41:48.851731311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:48.852789 containerd[1478]: time="2025-09-10T00:41:48.852715267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 10 00:41:48.855814 containerd[1478]: time="2025-09-10T00:41:48.855781880Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:48.858839 containerd[1478]: time="2025-09-10T00:41:48.858778011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:48.859609 containerd[1478]: time="2025-09-10T00:41:48.859570959Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.882676415s" Sep 10 00:41:48.859670 containerd[1478]: time="2025-09-10T00:41:48.859616805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 10 00:41:48.865507 containerd[1478]: time="2025-09-10T00:41:48.865461330Z" level=info msg="CreateContainer within sandbox \"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 00:41:48.885658 containerd[1478]: time="2025-09-10T00:41:48.885603980Z" level=info msg="CreateContainer within sandbox \"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"96324fab5a6cb01e7037f9b2b31f7e2a2718a54a57c79e40921e333eb246a773\"" Sep 10 00:41:48.886138 containerd[1478]: time="2025-09-10T00:41:48.886067550Z" level=info msg="StartContainer for \"96324fab5a6cb01e7037f9b2b31f7e2a2718a54a57c79e40921e333eb246a773\"" Sep 10 00:41:48.929448 systemd[1]: Started cri-containerd-96324fab5a6cb01e7037f9b2b31f7e2a2718a54a57c79e40921e333eb246a773.scope - libcontainer container 96324fab5a6cb01e7037f9b2b31f7e2a2718a54a57c79e40921e333eb246a773. 
Sep 10 00:41:48.969318 containerd[1478]: time="2025-09-10T00:41:48.969260237Z" level=info msg="StartContainer for \"96324fab5a6cb01e7037f9b2b31f7e2a2718a54a57c79e40921e333eb246a773\" returns successfully" Sep 10 00:41:48.975752 containerd[1478]: time="2025-09-10T00:41:48.975435042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 00:41:49.841616 systemd[1]: Started sshd@15-10.0.0.90:22-10.0.0.1:52378.service - OpenSSH per-connection server daemon (10.0.0.1:52378). Sep 10 00:41:49.898524 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 52378 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:49.900684 sshd[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:49.905311 systemd-logind[1453]: New session 16 of user core. Sep 10 00:41:49.912344 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 00:41:50.322020 sshd[5959]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:50.326818 systemd[1]: sshd@15-10.0.0.90:22-10.0.0.1:52378.service: Deactivated successfully. Sep 10 00:41:50.329058 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:41:50.329980 systemd-logind[1453]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:41:50.330926 systemd-logind[1453]: Removed session 16. Sep 10 00:41:52.743963 containerd[1478]: time="2025-09-10T00:41:52.743881502Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:52.835895 containerd[1478]: time="2025-09-10T00:41:52.835794476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 10 00:41:52.892383 containerd[1478]: time="2025-09-10T00:41:52.892308331Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:52.962683 containerd[1478]: time="2025-09-10T00:41:52.962637202Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:41:52.972111 containerd[1478]: time="2025-09-10T00:41:52.972078983Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.996599137s" Sep 10 00:41:52.972111 containerd[1478]: time="2025-09-10T00:41:52.972108098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 10 00:41:53.109168 containerd[1478]: time="2025-09-10T00:41:53.109115076Z" level=info msg="CreateContainer within sandbox \"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" 
Sep 10 00:41:53.674013 containerd[1478]: time="2025-09-10T00:41:53.673916487Z" level=info msg="CreateContainer within sandbox \"79f73a98f2cae6758bee7e65d483e22878fc44e0d4fe802515348a993bc54d25\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"704e2018c24d4be7ca993bafdb243734485fa6b491b1b1b40478522eb02fc30f\"" Sep 10 00:41:53.676905 containerd[1478]: time="2025-09-10T00:41:53.675254692Z" level=info msg="StartContainer for \"704e2018c24d4be7ca993bafdb243734485fa6b491b1b1b40478522eb02fc30f\"" Sep 10 00:41:53.719580 systemd[1]: Started cri-containerd-704e2018c24d4be7ca993bafdb243734485fa6b491b1b1b40478522eb02fc30f.scope - libcontainer container 704e2018c24d4be7ca993bafdb243734485fa6b491b1b1b40478522eb02fc30f. Sep 10 00:41:53.761785 containerd[1478]: time="2025-09-10T00:41:53.761680565Z" level=info msg="StartContainer for \"704e2018c24d4be7ca993bafdb243734485fa6b491b1b1b40478522eb02fc30f\" returns successfully" Sep 10 00:41:54.268125 kubelet[2569]: I0910 00:41:54.267771 2569 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 00:41:54.270913 kubelet[2569]: I0910 00:41:54.270865 2569 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 00:41:54.567023 kubelet[2569]: I0910 00:41:54.566909 2569 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6q4hq" podStartSLOduration=42.189659749 podStartE2EDuration="52.566889581s" podCreationTimestamp="2025-09-10 00:41:02 +0000 UTC" firstStartedPulling="2025-09-10 00:41:42.595383227 +0000 UTC m=+59.860817190" lastFinishedPulling="2025-09-10 00:41:52.97261306 +0000 UTC m=+70.238047022" observedRunningTime="2025-09-10 00:41:54.566711313 +0000 UTC m=+71.832145285" watchObservedRunningTime="2025-09-10 00:41:54.566889581 +0000 UTC m=+71.832323543" Sep 10 00:41:55.336481 systemd[1]: Started sshd@16-10.0.0.90:22-10.0.0.1:45928.service - OpenSSH per-connection server daemon (10.0.0.1:45928). Sep 10 00:41:55.400152 sshd[6028]: Accepted publickey for core from 10.0.0.1 port 45928 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:41:55.402617 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:41:55.408408 systemd-logind[1453]: New session 17 of user core. Sep 10 00:41:55.415890 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 00:41:55.628819 sshd[6028]: pam_unix(sshd:session): session closed for user core Sep 10 00:41:55.633870 systemd[1]: sshd@16-10.0.0.90:22-10.0.0.1:45928.service: Deactivated successfully. Sep 10 00:41:55.636607 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:41:55.637332 systemd-logind[1453]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:41:55.638716 systemd-logind[1453]: Removed session 17. Sep 10 00:41:56.875478 kubelet[2569]: E0910 00:41:56.875423 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:41:59.876577 kubelet[2569]: E0910 00:41:59.876499 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:00.641754 systemd[1]: Started sshd@17-10.0.0.90:22-10.0.0.1:56142.service - OpenSSH per-connection server daemon (10.0.0.1:56142). 
Sep 10 00:42:00.705127 sshd[6064]: Accepted publickey for core from 10.0.0.1 port 56142 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:00.707632 sshd[6064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:00.712304 systemd-logind[1453]: New session 18 of user core. Sep 10 00:42:00.717328 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 00:42:00.983002 sshd[6064]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:00.988327 systemd[1]: sshd@17-10.0.0.90:22-10.0.0.1:56142.service: Deactivated successfully. Sep 10 00:42:00.990696 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:42:00.991647 systemd-logind[1453]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:42:00.993094 systemd-logind[1453]: Removed session 18. Sep 10 00:42:04.270495 systemd[1]: run-containerd-runc-k8s.io-ccba3767fb2f43f17b214a33990403b25d43afe5172dbdc94ed491aa99cea8e9-runc.kLmSUL.mount: Deactivated successfully. Sep 10 00:42:06.011002 systemd[1]: Started sshd@18-10.0.0.90:22-10.0.0.1:56152.service - OpenSSH per-connection server daemon (10.0.0.1:56152). Sep 10 00:42:06.089662 sshd[6101]: Accepted publickey for core from 10.0.0.1 port 56152 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:06.093496 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:06.098396 systemd-logind[1453]: New session 19 of user core. Sep 10 00:42:06.109442 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 00:42:06.444382 sshd[6101]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:06.450071 systemd[1]: sshd@18-10.0.0.90:22-10.0.0.1:56152.service: Deactivated successfully. Sep 10 00:42:06.453341 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:42:06.454387 systemd-logind[1453]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:42:06.455934 systemd-logind[1453]: Removed session 19. Sep 10 00:42:11.462458 systemd[1]: Started sshd@19-10.0.0.90:22-10.0.0.1:59652.service - OpenSSH per-connection server daemon (10.0.0.1:59652). Sep 10 00:42:11.516509 sshd[6122]: Accepted publickey for core from 10.0.0.1 port 59652 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:11.518392 sshd[6122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:11.522788 systemd-logind[1453]: New session 20 of user core. Sep 10 00:42:11.535444 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:42:11.704676 sshd[6122]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:11.719389 systemd[1]: sshd@19-10.0.0.90:22-10.0.0.1:59652.service: Deactivated successfully. Sep 10 00:42:11.721649 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:42:11.723376 systemd-logind[1453]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:42:11.730579 systemd[1]: Started sshd@20-10.0.0.90:22-10.0.0.1:59654.service - OpenSSH per-connection server daemon (10.0.0.1:59654). Sep 10 00:42:11.732472 systemd-logind[1453]: Removed session 20. Sep 10 00:42:11.770487 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 59654 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:11.772650 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:11.777908 systemd-logind[1453]: New session 21 of user core. 
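Each SSH connection above runs as its own socket-activated unit, e.g. sshd@19-10.0.0.90:22-10.0.0.1:59652.service: with Accept=yes, systemd spawns one service instance per connection and names the instance from a connection counter, the local address:port, and the peer address:port. A small helper that unpacks those instance names; the counter-local-peer layout is read off the unit names in this journal:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseInstance splits "sshd@19-10.0.0.90:22-10.0.0.1:59652.service" into
    // the connection counter, local address:port, and peer address:port.
    func parseInstance(unit string) (id, local, peer string) {
        inst := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(inst, "-", 3)
        if len(parts) != 3 {
            return "", "", ""
        }
        return parts[0], parts[1], parts[2]
    }

    func main() {
        id, local, peer := parseInstance("sshd@19-10.0.0.90:22-10.0.0.1:59652.service")
        fmt.Println(id, local, peer) // 19 10.0.0.90:22 10.0.0.1:59652
    }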
Sep 10 00:42:11.787371 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 00:42:12.163745 sshd[6136]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:12.178364 systemd[1]: sshd@20-10.0.0.90:22-10.0.0.1:59654.service: Deactivated successfully. Sep 10 00:42:12.181216 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:42:12.183207 systemd-logind[1453]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:42:12.188665 systemd[1]: Started sshd@21-10.0.0.90:22-10.0.0.1:59664.service - OpenSSH per-connection server daemon (10.0.0.1:59664). Sep 10 00:42:12.190726 systemd-logind[1453]: Removed session 21. Sep 10 00:42:12.246600 sshd[6149]: Accepted publickey for core from 10.0.0.1 port 59664 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:12.248788 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:12.254361 systemd-logind[1453]: New session 22 of user core. Sep 10 00:42:12.266557 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 10 00:42:13.320279 sshd[6149]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:13.328805 systemd[1]: sshd@21-10.0.0.90:22-10.0.0.1:59664.service: Deactivated successfully. Sep 10 00:42:13.331068 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:42:13.332783 systemd-logind[1453]: Session 22 logged out. Waiting for processes to exit. Sep 10 00:42:13.341615 systemd[1]: Started sshd@22-10.0.0.90:22-10.0.0.1:59672.service - OpenSSH per-connection server daemon (10.0.0.1:59672). Sep 10 00:42:13.342772 systemd-logind[1453]: Removed session 22. Sep 10 00:42:13.378744 sshd[6175]: Accepted publickey for core from 10.0.0.1 port 59672 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:13.381255 sshd[6175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:13.386628 systemd-logind[1453]: New session 23 of user core. Sep 10 00:42:13.397417 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 00:42:14.183879 sshd[6175]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:14.193474 systemd[1]: sshd@22-10.0.0.90:22-10.0.0.1:59672.service: Deactivated successfully. Sep 10 00:42:14.195650 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:42:14.197482 systemd-logind[1453]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:42:14.204481 systemd[1]: Started sshd@23-10.0.0.90:22-10.0.0.1:59688.service - OpenSSH per-connection server daemon (10.0.0.1:59688). Sep 10 00:42:14.206091 systemd-logind[1453]: Removed session 23. Sep 10 00:42:14.240648 sshd[6189]: Accepted publickey for core from 10.0.0.1 port 59688 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:14.242325 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:14.246732 systemd-logind[1453]: New session 24 of user core. Sep 10 00:42:14.253305 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 10 00:42:14.385982 sshd[6189]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:14.390924 systemd[1]: sshd@23-10.0.0.90:22-10.0.0.1:59688.service: Deactivated successfully. Sep 10 00:42:14.393128 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:42:14.393818 systemd-logind[1453]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:42:14.394973 systemd-logind[1453]: Removed session 24. 
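Sessions 20 through 24 above open and close within seconds of each other, each leaving a matched "New session N of user core." / "Removed session N." pair from systemd-logind. A throwaway parser for timing such pairs, assuming one record per line as journalctl emits them; the regex and timestamp layout are sized to these lines and nothing more:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    // e.g. "Sep 10 00:42:12.254361 systemd-logind[1453]: New session 22 of user core."
    var record = regexp.MustCompile(`^(\w{3} +\d+ \d+:\d+:\d+\.\d+) .*?(New|Removed) session (\d+)`)

    func main() {
        opened := map[string]time.Time{}
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            m := record.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            // Journal stamps carry no year; Parse defaults it, which is
            // harmless for same-day deltas. Fractional seconds are accepted
            // even though the layout omits them.
            t, err := time.Parse("Jan 2 15:04:05", m[1])
            if err != nil {
                continue
            }
            switch m[2] {
            case "New":
                opened[m[3]] = t
            case "Removed":
                if start, ok := opened[m[3]]; ok {
                    fmt.Printf("session %s lasted %s\n", m[3], t.Sub(start))
                }
            }
        }
    }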
Sep 10 00:42:16.568169 systemd[1]: run-containerd-runc-k8s.io-a54f1aae41a538d3e1f42a8ac4ab961e09cfd486ead5d9e23b6afa0fef2ae1b4-runc.RKAQZ1.mount: Deactivated successfully. Sep 10 00:42:16.876750 kubelet[2569]: E0910 00:42:16.876503 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:19.398289 systemd[1]: Started sshd@24-10.0.0.90:22-10.0.0.1:59698.service - OpenSSH per-connection server daemon (10.0.0.1:59698). Sep 10 00:42:19.442584 sshd[6226]: Accepted publickey for core from 10.0.0.1 port 59698 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:19.444563 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:19.449694 systemd-logind[1453]: New session 25 of user core. Sep 10 00:42:19.456373 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 10 00:42:19.607810 sshd[6226]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:19.612581 systemd[1]: sshd@24-10.0.0.90:22-10.0.0.1:59698.service: Deactivated successfully. Sep 10 00:42:19.614570 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:42:19.615262 systemd-logind[1453]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:42:19.616268 systemd-logind[1453]: Removed session 25. Sep 10 00:42:22.876764 kubelet[2569]: E0910 00:42:22.876690 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:24.620142 systemd[1]: Started sshd@25-10.0.0.90:22-10.0.0.1:57428.service - OpenSSH per-connection server daemon (10.0.0.1:57428). Sep 10 00:42:24.657118 sshd[6244]: Accepted publickey for core from 10.0.0.1 port 57428 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:24.658843 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:24.662870 systemd-logind[1453]: New session 26 of user core. Sep 10 00:42:24.672369 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 10 00:42:24.816110 sshd[6244]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:24.820106 systemd[1]: sshd@25-10.0.0.90:22-10.0.0.1:57428.service: Deactivated successfully. Sep 10 00:42:24.822123 systemd[1]: session-26.scope: Deactivated successfully. Sep 10 00:42:24.822758 systemd-logind[1453]: Session 26 logged out. Waiting for processes to exit. Sep 10 00:42:24.823763 systemd-logind[1453]: Removed session 26. Sep 10 00:42:26.876451 kubelet[2569]: E0910 00:42:26.876402 2569 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 00:42:29.829553 systemd[1]: Started sshd@26-10.0.0.90:22-10.0.0.1:57430.service - OpenSSH per-connection server daemon (10.0.0.1:57430). Sep 10 00:42:29.886946 sshd[6281]: Accepted publickey for core from 10.0.0.1 port 57430 ssh2: RSA SHA256:8lYmw5fyCyWfPmBBOTh1KYsG06iZ45OCbq9sG6CkCSY Sep 10 00:42:29.889171 sshd[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:42:29.894280 systemd-logind[1453]: New session 27 of user core. Sep 10 00:42:29.906453 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 10 00:42:30.193912 sshd[6281]: pam_unix(sshd:session): session closed for user core Sep 10 00:42:30.199057 systemd[1]: sshd@26-10.0.0.90:22-10.0.0.1:57430.service: Deactivated successfully. Sep 10 00:42:30.202256 systemd[1]: session-27.scope: Deactivated successfully. Sep 10 00:42:30.204062 systemd-logind[1453]: Session 27 logged out. Waiting for processes to exit. Sep 10 00:42:30.206375 systemd-logind[1453]: Removed session 27.
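The recurring dns.go:153 errors throughout this stretch of the journal are kubelet noticing more nameserver entries in the node's resolv.conf than the libc resolver's limit of three and truncating the list, which is why exactly three servers (1.1.1.1 1.0.0.1 8.8.8.8) appear in the applied line. A sketch of that check; the three-server cap and the message text come from the log and glibc's MAXNS, the rest is illustrative:

    package main

    import (
        "bufio"
        "fmt"
        "log"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet warns past this

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // Collect every "nameserver <addr>" entry.
        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }

        // Past the limit: warn (as kubelet does) and keep only the first three.
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %s\n",
                strings.Join(servers[:maxNameservers], " "))
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied:", servers)
    }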