Jul 14 22:35:51.933371 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:35:51.933399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:35:51.933414 kernel: BIOS-provided physical RAM map:
Jul 14 22:35:51.933422 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 14 22:35:51.933430 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 14 22:35:51.933439 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 14 22:35:51.933449 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 14 22:35:51.933458 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 14 22:35:51.933466 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 14 22:35:51.933475 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 14 22:35:51.933486 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 14 22:35:51.933495 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 14 22:35:51.933508 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 14 22:35:51.933517 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 14 22:35:51.933530 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 14 22:35:51.933540 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 14 22:35:51.933567 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 14 22:35:51.933577 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 14 22:35:51.933586 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 14 22:35:51.933595 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:35:51.933604 kernel: NX (Execute Disable) protection: active
Jul 14 22:35:51.933613 kernel: APIC: Static calls initialized
Jul 14 22:35:51.933622 kernel: efi: EFI v2.7 by EDK II
Jul 14 22:35:51.933631 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jul 14 22:35:51.933640 kernel: SMBIOS 2.8 present.
Jul 14 22:35:51.933649 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 14 22:35:51.933658 kernel: Hypervisor detected: KVM
Jul 14 22:35:51.933671 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 22:35:51.933680 kernel: kvm-clock: using sched offset of 5535695372 cycles
Jul 14 22:35:51.933690 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 22:35:51.933700 kernel: tsc: Detected 2794.750 MHz processor
Jul 14 22:35:51.933709 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:35:51.933719 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:35:51.933728 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 14 22:35:51.933738 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 14 22:35:51.933747 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:35:51.933759 kernel: Using GB pages for direct mapping
Jul 14 22:35:51.933769 kernel: Secure boot disabled
Jul 14 22:35:51.933778 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:35:51.933788 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 14 22:35:51.933802 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 22:35:51.933812 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:35:51.933822 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:35:51.933835 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 14 22:35:51.933844 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:35:51.933858 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:35:51.933867 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:35:51.933877 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:35:51.933887 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 14 22:35:51.933897 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 14 22:35:51.933910 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 14 22:35:51.933919 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 14 22:35:51.933929 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 14 22:35:51.933939 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 14 22:35:51.933949 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 14 22:35:51.933958 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 14 22:35:51.933968 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 14 22:35:51.933977 kernel: No NUMA configuration found
Jul 14 22:35:51.933990 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 14 22:35:51.934003 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 14 22:35:51.934013 kernel: Zone ranges:
Jul 14 22:35:51.934023 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:35:51.934033 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 14 22:35:51.934043 kernel: Normal empty
Jul 14 22:35:51.934052 kernel: Movable zone start for each node
Jul 14 22:35:51.934071 kernel: Early memory node ranges
Jul 14 22:35:51.934081 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 14 22:35:51.934090 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 14 22:35:51.934100 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 14 22:35:51.934113 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 14 22:35:51.934122 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 14 22:35:51.934132 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 14 22:35:51.934145 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 14 22:35:51.934154 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:35:51.934164 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 14 22:35:51.934174 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 14 22:35:51.934183 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:35:51.934193 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 14 22:35:51.934206 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 14 22:35:51.934216 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 14 22:35:51.934226 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 22:35:51.934236 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 22:35:51.934245 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:35:51.934255 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 22:35:51.934265 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 22:35:51.934275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 22:35:51.934284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 22:35:51.934297 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 22:35:51.934307 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:35:51.934317 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 22:35:51.934326 kernel: TSC deadline timer available
Jul 14 22:35:51.934336 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 22:35:51.934346 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 22:35:51.934356 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 22:35:51.934365 kernel: kvm-guest: setup PV sched yield
Jul 14 22:35:51.934375 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 14 22:35:51.934388 kernel: Booting paravirtualized kernel on KVM
Jul 14 22:35:51.934398 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:35:51.934408 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 22:35:51.934417 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 22:35:51.934427 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 22:35:51.934437 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 22:35:51.934446 kernel: kvm-guest: PV spinlocks enabled
Jul 14 22:35:51.934456 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 22:35:51.934467 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:35:51.934483 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:35:51.934493 kernel: random: crng init done
Jul 14 22:35:51.934503 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:35:51.934513 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:35:51.934523 kernel: Fallback order for Node 0: 0
Jul 14 22:35:51.934532 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 14 22:35:51.934542 kernel: Policy zone: DMA32
Jul 14 22:35:51.934564 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:35:51.934592 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 166140K reserved, 0K cma-reserved)
Jul 14 22:35:51.934602 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:35:51.934611 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:35:51.934621 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:35:51.934631 kernel: Dynamic Preempt: voluntary
Jul 14 22:35:51.934651 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:35:51.934669 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:35:51.934680 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:35:51.934690 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:35:51.934700 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:35:51.934710 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:35:51.934721 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:35:51.934734 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:35:51.934744 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 22:35:51.934761 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 22:35:51.934771 kernel: Console: colour dummy device 80x25
Jul 14 22:35:51.934781 kernel: printk: console [ttyS0] enabled
Jul 14 22:35:51.934794 kernel: ACPI: Core revision 20230628
Jul 14 22:35:51.934804 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 22:35:51.934815 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:35:51.934825 kernel: x2apic enabled
Jul 14 22:35:51.934835 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:35:51.934845 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 22:35:51.934855 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 22:35:51.934865 kernel: kvm-guest: setup PV IPIs
Jul 14 22:35:51.934876 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:35:51.934889 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 22:35:51.934899 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 14 22:35:51.934909 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 22:35:51.934920 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 22:35:51.934930 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 22:35:51.934940 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:35:51.934950 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 22:35:51.934961 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 22:35:51.934971 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 22:35:51.934984 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 22:35:51.934995 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:35:51.935005 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:35:51.935015 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 22:35:51.935029 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 22:35:51.935039 kernel: x86/bugs: return thunk changed
Jul 14 22:35:51.935049 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 22:35:51.935068 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:35:51.935078 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:35:51.935091 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:35:51.935102 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:35:51.935112 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:35:51.935123 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:35:51.935133 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:35:51.935143 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:35:51.935153 kernel: landlock: Up and running.
Jul 14 22:35:51.935163 kernel: SELinux: Initializing.
Jul 14 22:35:51.935174 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:35:51.935187 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:35:51.935198 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 22:35:51.935208 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:35:51.935218 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:35:51.935229 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:35:51.935239 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 22:35:51.935249 kernel: ... version: 0
Jul 14 22:35:51.935259 kernel: ... bit width: 48
Jul 14 22:35:51.935273 kernel: ... generic registers: 6
Jul 14 22:35:51.935283 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:35:51.935293 kernel: ... max period: 00007fffffffffff
Jul 14 22:35:51.935303 kernel: ... fixed-purpose events: 0
Jul 14 22:35:51.935313 kernel: ... event mask: 000000000000003f
Jul 14 22:35:51.935323 kernel: signal: max sigframe size: 1776
Jul 14 22:35:51.935333 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:35:51.935344 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:35:51.935354 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:35:51.935364 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 22:35:51.935377 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 22:35:51.935387 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:35:51.935397 kernel: smpboot: Max logical packages: 1
Jul 14 22:35:51.935407 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 14 22:35:51.935417 kernel: devtmpfs: initialized
Jul 14 22:35:51.935427 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:35:51.935438 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 14 22:35:51.935448 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 14 22:35:51.935458 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 14 22:35:51.935472 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 14 22:35:51.935482 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 14 22:35:51.935492 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:35:51.935502 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:35:51.935513 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:35:51.935523 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:35:51.935533 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:35:51.935544 kernel: audit: type=2000 audit(1752532551.186:1): state=initialized audit_enabled=0 res=1
Jul 14 22:35:51.935579 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:35:51.935590 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:35:51.935600 kernel: cpuidle: using governor menu
Jul 14 22:35:51.935611 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:35:51.935621 kernel: dca service started, version 1.12.1
Jul 14 22:35:51.935631 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 22:35:51.935642 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 14 22:35:51.935652 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:35:51.935662 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:35:51.935675 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:35:51.935685 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:35:51.935695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:35:51.935705 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:35:51.935727 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:35:51.935738 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:35:51.935747 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:35:51.935756 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:35:51.935766 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 22:35:51.935780 kernel: ACPI: Interpreter enabled
Jul 14 22:35:51.935790 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 22:35:51.935800 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:35:51.935819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:35:51.935839 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 22:35:51.935858 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 22:35:51.935877 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:35:51.936192 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:35:51.936364 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 22:35:51.936520 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 22:35:51.936534 kernel: PCI host bridge to bus 0000:00
Jul 14 22:35:51.936724 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:35:51.936878 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 22:35:51.937021 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:35:51.937180 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 22:35:51.937328 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:35:51.937469 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 14 22:35:51.937705 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:35:51.937977 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 22:35:51.938197 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 22:35:51.938382 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 14 22:35:51.938588 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 14 22:35:51.938781 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 14 22:35:51.938971 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 14 22:35:51.939169 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:35:51.939381 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:35:51.939619 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 14 22:35:51.939803 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 14 22:35:51.939978 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 14 22:35:51.940188 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 22:35:51.940380 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 14 22:35:51.940564 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 14 22:35:51.940753 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 14 22:35:51.940948 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 22:35:51.941131 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 14 22:35:51.941311 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 14 22:35:51.941508 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 14 22:35:51.941763 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 14 22:35:51.941950 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 22:35:51.942125 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 22:35:51.942315 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 22:35:51.942494 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 14 22:35:51.942687 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 14 22:35:51.942881 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 22:35:51.943047 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 14 22:35:51.943074 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 22:35:51.943085 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 22:35:51.943096 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 22:35:51.943107 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 22:35:51.943118 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 22:35:51.943134 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 22:35:51.943144 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 22:35:51.943155 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 22:35:51.943166 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 22:35:51.943177 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 22:35:51.943188 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 22:35:51.943199 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 22:35:51.943209 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 22:35:51.943220 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 22:35:51.943236 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 22:35:51.943246 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 22:35:51.943257 kernel: iommu: Default domain type: Translated
Jul 14 22:35:51.943268 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 22:35:51.943278 kernel: efivars: Registered efivars operations
Jul 14 22:35:51.943289 kernel: PCI: Using ACPI for IRQ routing
Jul 14 22:35:51.943300 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 22:35:51.943311 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 14 22:35:51.943322 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 14 22:35:51.943335 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 14 22:35:51.943343 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 14 22:35:51.943485 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 22:35:51.943681 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 22:35:51.943868 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 22:35:51.943885 kernel: vgaarb: loaded
Jul 14 22:35:51.943897 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 22:35:51.943908 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 22:35:51.943925 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 22:35:51.943936 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:35:51.943948 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:35:51.943959 kernel: pnp: PnP ACPI init
Jul 14 22:35:51.944205 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 22:35:51.944223 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 22:35:51.944233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 22:35:51.944242 kernel: NET: Registered PF_INET protocol family
Jul 14 22:35:51.944252 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:35:51.944267 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:35:51.944277 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:35:51.944287 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:35:51.944297 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 22:35:51.944306 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:35:51.944316 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:35:51.944325 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:35:51.944335 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:35:51.944347 kernel: NET: Registered PF_XDP protocol family
Jul 14 22:35:51.944517 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 14 22:35:51.944762 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 14 22:35:51.944896 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 22:35:51.945011 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 22:35:51.945138 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 22:35:51.945253 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 22:35:51.945367 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 22:35:51.945492 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 14 22:35:51.945507 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:35:51.945518 kernel: Initialise system trusted keyrings
Jul 14 22:35:51.945528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:35:51.945539 kernel: Key type asymmetric registered
Jul 14 22:35:51.945613 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:35:51.945627 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 22:35:51.945638 kernel: io scheduler mq-deadline registered
Jul 14 22:35:51.945649 kernel: io scheduler kyber registered
Jul 14 22:35:51.945666 kernel: io scheduler bfq registered
Jul 14 22:35:51.945677 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 22:35:51.945688 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 22:35:51.945699 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 22:35:51.945709 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 22:35:51.945720 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:35:51.945730 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 22:35:51.945741 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 22:35:51.945752 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 22:35:51.945768 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 22:35:51.945969 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 22:35:51.946148 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 22:35:51.946165 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jul 14 22:35:51.946322 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:35:51 UTC (1752532551)
Jul 14 22:35:51.946480 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 22:35:51.946497 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 22:35:51.946508 kernel: efifb: probing for efifb
Jul 14 22:35:51.946524 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 14 22:35:51.946534 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 14 22:35:51.946546 kernel: efifb: scrolling: redraw
Jul 14 22:35:51.946611 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 14 22:35:51.946623 kernel: Console: switching to colour frame buffer device 100x37
Jul 14 22:35:51.946635 kernel: fb0: EFI VGA frame buffer device
Jul 14 22:35:51.946673 kernel: pstore: Using crash dump compression: deflate
Jul 14 22:35:51.946688 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 14 22:35:51.946699 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:35:51.946714 kernel: Segment Routing with IPv6
Jul 14 22:35:51.946729 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:35:51.946740 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:35:51.946754 kernel: Key type dns_resolver registered
Jul 14 22:35:51.946766 kernel: IPI shorthand broadcast: enabled
Jul 14 22:35:51.946778 kernel: sched_clock: Marking stable (874004014, 112007783)->(1016044641, -30032844)
Jul 14 22:35:51.946789 kernel: registered taskstats version 1
Jul 14 22:35:51.946801 kernel: Loading compiled-in X.509 certificates
Jul 14 22:35:51.946812 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870'
Jul 14 22:35:51.946828 kernel: Key type .fscrypt registered
Jul 14 22:35:51.946839 kernel: Key type fscrypt-provisioning registered
Jul 14 22:35:51.946850 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:35:51.946861 kernel: ima: Allocated hash algorithm: sha1
Jul 14 22:35:51.946872 kernel: ima: No architecture policies found
Jul 14 22:35:51.946883 kernel: clk: Disabling unused clocks
Jul 14 22:35:51.946894 kernel: Freeing unused kernel image (initmem) memory: 42876K
Jul 14 22:35:51.946906 kernel: Write protecting the kernel read-only data: 36864k
Jul 14 22:35:51.946917 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 14 22:35:51.946933 kernel: Run /init as init process
Jul 14 22:35:51.946944 kernel: with arguments:
Jul 14 22:35:51.946955 kernel: /init
Jul 14 22:35:51.946966 kernel: with environment:
Jul 14 22:35:51.946976 kernel: HOME=/
Jul 14 22:35:51.946987 kernel: TERM=linux
Jul 14 22:35:51.946999 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 22:35:51.947013 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:35:51.947031 systemd[1]: Detected virtualization kvm.
Jul 14 22:35:51.947043 systemd[1]: Detected architecture x86-64.
Jul 14 22:35:51.947065 systemd[1]: Running in initrd.
Jul 14 22:35:51.947078 systemd[1]: No hostname configured, using default hostname.
Jul 14 22:35:51.947089 systemd[1]: Hostname set to <localhost>.
Jul 14 22:35:51.947109 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:35:51.947120 systemd[1]: Queued start job for default target initrd.target.
Jul 14 22:35:51.947132 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:35:51.947144 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:35:51.947157 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 22:35:51.947169 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:35:51.947181 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 22:35:51.947198 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 22:35:51.947212 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 22:35:51.947224 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 22:35:51.947236 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:35:51.947247 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:35:51.947259 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:35:51.947271 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:35:51.947286 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:35:51.947298 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:35:51.947310 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:35:51.947322 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:35:51.947334 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:35:51.947346 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:35:51.947358 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:35:51.947370 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:35:51.947382 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:35:51.947398 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:35:51.947410 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 22:35:51.947422 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:35:51.947433 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 22:35:51.947445 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 22:35:51.947457 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:35:51.947469 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:35:51.947481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:35:51.947497 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 22:35:51.947508 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:35:51.947520 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 22:35:51.947533 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:35:51.947600 systemd-journald[193]: Collecting audit messages is disabled.
Jul 14 22:35:51.947635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:35:51.947648 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:35:51.947659 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:35:51.947672 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:35:51.947687 systemd-journald[193]: Journal started
Jul 14 22:35:51.947711 systemd-journald[193]: Runtime Journal (/run/log/journal/e16f87b9d5434c5caafb6291774fe8e3) is 6.0M, max 48.3M, 42.2M free.
Jul 14 22:35:51.940301 systemd-modules-load[194]: Inserted module 'overlay'
Jul 14 22:35:51.949637 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:35:51.959602 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:35:51.966047 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:35:51.971302 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:35:51.973956 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 22:35:51.974587 kernel: Bridge firewalling registered
Jul 14 22:35:51.974605 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 14 22:35:51.983702 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 22:35:51.985882 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:35:51.988432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:35:51.992636 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:35:51.995610 dracut-cmdline[222]: dracut-dracut-053
Jul 14 22:35:51.999707 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:35:52.012836 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:35:52.023752 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:35:52.061274 systemd-resolved[250]: Positive Trust Anchors:
Jul 14 22:35:52.061295 systemd-resolved[250]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:35:52.061339 systemd-resolved[250]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:35:52.064156 systemd-resolved[250]: Defaulting to hostname 'linux'.
Jul 14 22:35:52.065467 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:35:52.071165 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:35:52.113585 kernel: SCSI subsystem initialized
Jul 14 22:35:52.124582 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 22:35:52.135587 kernel: iscsi: registered transport (tcp)
Jul 14 22:35:52.158587 kernel: iscsi: registered transport (qla4xxx)
Jul 14 22:35:52.158652 kernel: QLogic iSCSI HBA Driver
Jul 14 22:35:52.210268 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:35:52.219838 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 22:35:52.244775 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 22:35:52.244870 kernel: device-mapper: uevent: version 1.0.3
Jul 14 22:35:52.244883 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 22:35:52.286589 kernel: raid6: avx2x4 gen() 30016 MB/s
Jul 14 22:35:52.303583 kernel: raid6: avx2x2 gen() 30631 MB/s
Jul 14 22:35:52.320631 kernel: raid6: avx2x1 gen() 25469 MB/s
Jul 14 22:35:52.320658 kernel: raid6: using algorithm avx2x2 gen() 30631 MB/s
Jul 14 22:35:52.338699 kernel: raid6: .... xor() 19502 MB/s, rmw enabled
Jul 14 22:35:52.338736 kernel: raid6: using avx2x2 recovery algorithm
Jul 14 22:35:52.359589 kernel: xor: automatically using best checksumming function avx
Jul 14 22:35:52.514607 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 22:35:52.527620 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:35:52.539824 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:35:52.552386 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jul 14 22:35:52.556757 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:35:52.559428 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 22:35:52.577547 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Jul 14 22:35:52.613162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:35:52.626773 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:35:52.701703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:35:52.714786 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 22:35:52.733910 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:35:52.739581 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 14 22:35:52.740654 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:35:52.742470 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:35:52.745297 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:35:52.752586 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 22:35:52.756771 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 22:35:52.764478 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 22:35:52.769592 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 14 22:35:52.770575 kernel: AES CTR mode by8 optimization enabled
Jul 14 22:35:52.774456 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:35:52.783825 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 22:35:52.783859 kernel: GPT:9289727 != 19775487
Jul 14 22:35:52.783875 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 22:35:52.783890 kernel: GPT:9289727 != 19775487
Jul 14 22:35:52.783904 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 22:35:52.783920 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:35:52.774921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:35:52.786047 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:35:52.787865 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:35:52.788151 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:35:52.789770 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:35:52.801575 kernel: libata version 3.00 loaded.
Jul 14 22:35:52.805000 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:35:52.809275 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:35:52.817736 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (461)
Jul 14 22:35:52.817765 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466)
Jul 14 22:35:52.817780 kernel: ahci 0000:00:1f.2: version 3.0
Jul 14 22:35:52.821610 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 14 22:35:52.821650 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 14 22:35:52.821869 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 14 22:35:52.832628 kernel: scsi host0: ahci
Jul 14 22:35:52.834683 kernel: scsi host1: ahci
Jul 14 22:35:52.834887 kernel: scsi host2: ahci
Jul 14 22:35:52.835575 kernel: scsi host3: ahci
Jul 14 22:35:52.837590 kernel: scsi host4: ahci
Jul 14 22:35:52.839295 kernel: scsi host5: ahci
Jul 14 22:35:52.839527 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 14 22:35:52.839580 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 14 22:35:52.841013 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 14 22:35:52.841050 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 14 22:35:52.842724 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 14 22:35:52.843864 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:35:52.847304 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 14 22:35:52.853884 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 22:35:52.864871 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 22:35:52.871070 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 22:35:52.873615 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 22:35:52.881246 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:35:52.894689 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 22:35:52.898046 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:35:52.903471 disk-uuid[556]: Primary Header is updated.
Jul 14 22:35:52.903471 disk-uuid[556]: Secondary Entries is updated.
Jul 14 22:35:52.903471 disk-uuid[556]: Secondary Header is updated.
Jul 14 22:35:52.907600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:35:52.911570 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:35:52.916597 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:35:52.926227 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:35:53.158603 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 14 22:35:53.158708 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 14 22:35:53.161185 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 14 22:35:53.161290 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 14 22:35:53.161310 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 14 22:35:53.162605 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 14 22:35:53.163603 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 14 22:35:53.163632 kernel: ata3.00: applying bridge limits
Jul 14 22:35:53.164657 kernel: ata3.00: configured for UDMA/100
Jul 14 22:35:53.165606 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 14 22:35:53.220617 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 14 22:35:53.220955 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 14 22:35:53.234583 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 14 22:35:54.010607 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:35:54.011122 disk-uuid[559]: The operation has completed successfully.
Jul 14 22:35:54.041405 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 22:35:54.041541 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 22:35:54.066806 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 22:35:54.072586 sh[597]: Success
Jul 14 22:35:54.086595 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 14 22:35:54.123330 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 22:35:54.138495 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 22:35:54.141241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 22:35:54.155173 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127
Jul 14 22:35:54.155217 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:35:54.155229 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 22:35:54.156472 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 22:35:54.157323 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 22:35:54.163109 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 22:35:54.164110 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 22:35:54.177943 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 22:35:54.181051 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 22:35:54.190871 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:35:54.190904 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:35:54.190915 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:35:54.194606 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:35:54.204884 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 22:35:54.206602 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:35:54.299372 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:35:54.331757 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:35:54.354839 systemd-networkd[775]: lo: Link UP
Jul 14 22:35:54.354850 systemd-networkd[775]: lo: Gained carrier
Jul 14 22:35:54.356497 systemd-networkd[775]: Enumeration completed
Jul 14 22:35:54.356638 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:35:54.356908 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:35:54.356912 systemd-networkd[775]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:35:54.361624 systemd-networkd[775]: eth0: Link UP
Jul 14 22:35:54.361628 systemd-networkd[775]: eth0: Gained carrier
Jul 14 22:35:54.361635 systemd-networkd[775]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:35:54.363178 systemd[1]: Reached target network.target - Network.
Jul 14 22:35:54.403630 systemd-networkd[775]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:35:54.659932 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 22:35:54.678768 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 22:35:54.721219 systemd-resolved[250]: Detected conflict on linux IN A 10.0.0.12
Jul 14 22:35:54.721238 systemd-resolved[250]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Jul 14 22:35:54.737692 ignition[780]: Ignition 2.19.0
Jul 14 22:35:54.737706 ignition[780]: Stage: fetch-offline
Jul 14 22:35:54.737766 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:35:54.737778 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:35:54.737887 ignition[780]: parsed url from cmdline: ""
Jul 14 22:35:54.737891 ignition[780]: no config URL provided
Jul 14 22:35:54.737897 ignition[780]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 22:35:54.737908 ignition[780]: no config at "/usr/lib/ignition/user.ign"
Jul 14 22:35:54.737941 ignition[780]: op(1): [started] loading QEMU firmware config module
Jul 14 22:35:54.737946 ignition[780]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 22:35:54.746110 ignition[780]: op(1): [finished] loading QEMU firmware config module
Jul 14 22:35:54.784039 ignition[780]: parsing config with SHA512: d7a877696a841a9a8baaf8349a8986112f0d8c30ce1c62673c2d51b6c54553f9d5384876bf55475cc43aaa10125c6751b857b289b41a0ed0d7082193bc01f8ad
Jul 14 22:35:54.790136 unknown[780]: fetched base config from "system"
Jul 14 22:35:54.790713 unknown[780]: fetched user config from "qemu"
Jul 14 22:35:54.791309 ignition[780]: fetch-offline: fetch-offline passed
Jul 14 22:35:54.791442 ignition[780]: Ignition finished successfully
Jul 14 22:35:54.795917 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:35:54.797344 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 22:35:54.808698 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 22:35:54.824135 ignition[789]: Ignition 2.19.0
Jul 14 22:35:54.824147 ignition[789]: Stage: kargs
Jul 14 22:35:54.824356 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:35:54.824370 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:35:54.825312 ignition[789]: kargs: kargs passed
Jul 14 22:35:54.825362 ignition[789]: Ignition finished successfully
Jul 14 22:35:54.829139 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 22:35:54.846720 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 22:35:54.860745 ignition[797]: Ignition 2.19.0
Jul 14 22:35:54.863760 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 22:35:54.860753 ignition[797]: Stage: disks
Jul 14 22:35:54.864991 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 22:35:54.860942 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:35:54.866164 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:35:54.860955 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:35:54.866755 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:35:54.861759 ignition[797]: disks: disks passed
Jul 14 22:35:54.867076 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:35:54.861804 ignition[797]: Ignition finished successfully
Jul 14 22:35:54.867394 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:35:54.868867 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 22:35:54.889213 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 22:35:55.119491 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 22:35:55.132721 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 22:35:55.226610 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none.
Jul 14 22:35:55.227282 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 22:35:55.229596 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:35:55.245710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:35:55.248850 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 22:35:55.252489 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 22:35:55.254644 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 22:35:55.254724 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:35:55.276953 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Jul 14 22:35:55.276983 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:35:55.276995 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:35:55.277006 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:35:55.278217 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 22:35:55.280439 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:35:55.281530 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:35:55.295773 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 14 22:35:55.361409 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Jul 14 22:35:55.372251 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Jul 14 22:35:55.377069 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Jul 14 22:35:55.381172 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Jul 14 22:35:55.463598 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 14 22:35:55.472651 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 14 22:35:55.487906 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 14 22:35:55.494502 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 14 22:35:55.495809 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:35:55.519296 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 14 22:35:55.521642 systemd-networkd[775]: eth0: Gained IPv6LL Jul 14 22:35:55.534303 ignition[934]: INFO : Ignition 2.19.0 Jul 14 22:35:55.534303 ignition[934]: INFO : Stage: mount Jul 14 22:35:55.536025 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:35:55.536025 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:35:55.538191 ignition[934]: INFO : mount: mount passed Jul 14 22:35:55.538191 ignition[934]: INFO : Ignition finished successfully Jul 14 22:35:55.541907 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 14 22:35:55.553785 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 14 22:35:56.244861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 14 22:35:56.254490 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (944) Jul 14 22:35:56.254571 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d Jul 14 22:35:56.254583 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 14 22:35:56.256163 kernel: BTRFS info (device vda6): using free space tree Jul 14 22:35:56.259587 kernel: BTRFS info (device vda6): auto enabling async discard Jul 14 22:35:56.260817 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 14 22:35:56.289919 ignition[961]: INFO : Ignition 2.19.0 Jul 14 22:35:56.289919 ignition[961]: INFO : Stage: files Jul 14 22:35:56.291865 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:35:56.291865 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:35:56.294375 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Jul 14 22:35:56.294375 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 14 22:35:56.294375 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 14 22:35:56.299183 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 14 22:35:56.299183 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 14 22:35:56.299183 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 14 22:35:56.298135 unknown[961]: wrote ssh authorized keys file for user: core Jul 14 22:35:56.305405 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 14 22:35:56.305405 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 14 22:36:06.367714 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 14 22:36:06.565319 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 14 22:36:06.565319 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:36:06.569208 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 14 22:36:07.269226 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 14 22:36:07.624673 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 14 22:36:07.624673 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 14 22:36:07.628624 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 14 22:36:07.650993 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:36:07.657272 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 14 22:36:07.658853 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 14 22:36:07.658853 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 14 22:36:07.658853 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 14 22:36:07.658853 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:36:07.658853 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 14 22:36:07.658853 ignition[961]: INFO : files: files passed Jul 14 22:36:07.658853 ignition[961]: INFO : Ignition finished successfully Jul 14 22:36:07.660458 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 14 22:36:07.669845 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 14 22:36:07.671911 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 14 22:36:07.674270 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 14 22:36:07.674417 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 14 22:36:07.683208 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Jul 14 22:36:07.685914 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:36:07.687776 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:36:07.690735 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 14 22:36:07.688420 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:36:07.691259 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 14 22:36:07.704704 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 14 22:36:07.731773 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 14 22:36:07.731912 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 14 22:36:07.734236 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 14 22:36:07.736222 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 14 22:36:07.736854 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 14 22:36:07.737627 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 14 22:36:07.755805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:36:07.766697 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 14 22:36:07.777768 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:36:07.779007 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:36:07.781233 systemd[1]: Stopped target timers.target - Timer Units. Jul 14 22:36:07.783201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 14 22:36:07.783312 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 14 22:36:07.785461 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 14 22:36:07.787165 systemd[1]: Stopped target basic.target - Basic System. Jul 14 22:36:07.789132 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 14 22:36:07.791135 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 14 22:36:07.793114 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 14 22:36:07.795219 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 14 22:36:07.797382 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 14 22:36:07.799861 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 14 22:36:07.802128 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 14 22:36:07.804310 systemd[1]: Stopped target swap.target - Swaps. Jul 14 22:36:07.806073 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 14 22:36:07.806190 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 14 22:36:07.808292 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jul 14 22:36:07.809891 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:36:07.811935 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 14 22:36:07.812040 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:36:07.814120 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 14 22:36:07.814234 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 14 22:36:07.816396 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 14 22:36:07.816510 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 14 22:36:07.818466 systemd[1]: Stopped target paths.target - Path Units. Jul 14 22:36:07.820173 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 14 22:36:07.823654 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:36:07.825660 systemd[1]: Stopped target slices.target - Slice Units. Jul 14 22:36:07.827546 systemd[1]: Stopped target sockets.target - Socket Units. Jul 14 22:36:07.829278 systemd[1]: iscsid.socket: Deactivated successfully. Jul 14 22:36:07.829380 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 14 22:36:07.831232 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 14 22:36:07.831325 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 14 22:36:07.833639 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 14 22:36:07.833757 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 14 22:36:07.835639 systemd[1]: ignition-files.service: Deactivated successfully. Jul 14 22:36:07.835749 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 14 22:36:07.850742 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 14 22:36:07.852440 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 14 22:36:07.853448 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 14 22:36:07.853604 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:36:07.856789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 14 22:36:07.856910 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 14 22:36:07.862159 ignition[1017]: INFO : Ignition 2.19.0 Jul 14 22:36:07.862159 ignition[1017]: INFO : Stage: umount Jul 14 22:36:07.862397 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 14 22:36:07.862520 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 14 22:36:07.866385 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 14 22:36:07.866385 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 14 22:36:07.866385 ignition[1017]: INFO : umount: umount passed Jul 14 22:36:07.866385 ignition[1017]: INFO : Ignition finished successfully Jul 14 22:36:07.866929 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 14 22:36:07.867062 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 14 22:36:07.868972 systemd[1]: Stopped target network.target - Network. Jul 14 22:36:07.870459 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 14 22:36:07.870529 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Jul 14 22:36:07.872392 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 14 22:36:07.872448 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 14 22:36:07.874328 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 14 22:36:07.874378 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 14 22:36:07.876653 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 14 22:36:07.876722 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 14 22:36:07.878896 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 14 22:36:07.880745 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 14 22:36:07.883627 systemd-networkd[775]: eth0: DHCPv6 lease lost Jul 14 22:36:07.884039 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 14 22:36:07.887693 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 14 22:36:07.887855 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 14 22:36:07.890784 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 14 22:36:07.890975 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 14 22:36:07.894139 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 14 22:36:07.894327 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 14 22:36:07.898038 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 14 22:36:07.898107 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:36:07.900017 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 14 22:36:07.900088 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 14 22:36:07.907838 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 14 22:36:07.908834 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 14 22:36:07.908927 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 14 22:36:07.911329 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 14 22:36:07.911381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:36:07.914010 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 14 22:36:07.914065 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 14 22:36:07.915358 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 14 22:36:07.915411 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:36:07.917991 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:36:07.928142 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 14 22:36:07.928331 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 14 22:36:07.930381 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 14 22:36:07.930629 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:36:07.933324 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 14 22:36:07.933423 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 14 22:36:07.935066 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 14 22:36:07.935127 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jul 14 22:36:07.937312 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 14 22:36:07.937368 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 14 22:36:07.939823 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 14 22:36:07.939891 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 14 22:36:07.941805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 14 22:36:07.941872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 14 22:36:07.952746 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 14 22:36:07.953233 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 14 22:36:07.953300 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:36:07.953615 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 14 22:36:07.953664 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:36:07.953922 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 14 22:36:07.953978 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:36:07.954239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:36:07.954285 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:36:07.962334 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 14 22:36:07.962508 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 14 22:36:07.964757 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 14 22:36:07.967415 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 14 22:36:07.979672 systemd[1]: Switching root. Jul 14 22:36:08.008647 systemd-journald[193]: Journal stopped Jul 14 22:36:09.680176 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jul 14 22:36:09.680242 kernel: SELinux: policy capability network_peer_controls=1 Jul 14 22:36:09.680257 kernel: SELinux: policy capability open_perms=1 Jul 14 22:36:09.680273 kernel: SELinux: policy capability extended_socket_class=1 Jul 14 22:36:09.680285 kernel: SELinux: policy capability always_check_network=0 Jul 14 22:36:09.680297 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 14 22:36:09.680309 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 14 22:36:09.680320 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 14 22:36:09.680336 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 14 22:36:09.680347 kernel: audit: type=1403 audit(1752532568.876:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 14 22:36:09.680363 systemd[1]: Successfully loaded SELinux policy in 40.587ms. Jul 14 22:36:09.680386 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.836ms. Jul 14 22:36:09.680404 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 14 22:36:09.680416 systemd[1]: Detected virtualization kvm. Jul 14 22:36:09.680428 systemd[1]: Detected architecture x86-64. 
Jul 14 22:36:09.680440 systemd[1]: Detected first boot. Jul 14 22:36:09.680452 systemd[1]: Initializing machine ID from VM UUID. Jul 14 22:36:09.680467 zram_generator::config[1062]: No configuration found. Jul 14 22:36:09.680481 systemd[1]: Populated /etc with preset unit settings. Jul 14 22:36:09.680493 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 14 22:36:09.680505 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 14 22:36:09.680518 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 14 22:36:09.680541 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 14 22:36:09.680580 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 14 22:36:09.680594 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 14 22:36:09.680610 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 14 22:36:09.680622 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 14 22:36:09.680634 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 14 22:36:09.680647 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 14 22:36:09.680659 systemd[1]: Created slice user.slice - User and Session Slice. Jul 14 22:36:09.680671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 14 22:36:09.680683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 14 22:36:09.680695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 14 22:36:09.680708 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 14 22:36:09.680723 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 14 22:36:09.680735 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 14 22:36:09.680748 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 14 22:36:09.680760 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 14 22:36:09.680772 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 14 22:36:09.680785 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 14 22:36:09.680797 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 14 22:36:09.680812 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 14 22:36:09.680824 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 14 22:36:09.680837 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 14 22:36:09.680849 systemd[1]: Reached target slices.target - Slice Units. Jul 14 22:36:09.680861 systemd[1]: Reached target swap.target - Swaps. Jul 14 22:36:09.680873 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 14 22:36:09.680885 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 14 22:36:09.680898 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 14 22:36:09.680910 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
Jul 14 22:36:09.680922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 14 22:36:09.680937 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 14 22:36:09.680949 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 14 22:36:09.680966 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 14 22:36:09.680978 systemd[1]: Mounting media.mount - External Media Directory... Jul 14 22:36:09.680990 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:36:09.681002 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 14 22:36:09.681015 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 14 22:36:09.681028 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 14 22:36:09.681043 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 14 22:36:09.681055 systemd[1]: Reached target machines.target - Containers. Jul 14 22:36:09.681068 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 14 22:36:09.681080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:36:09.681092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 14 22:36:09.681105 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 14 22:36:09.681117 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:36:09.681129 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:36:09.681141 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:36:09.681155 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 14 22:36:09.681167 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:36:09.681179 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 14 22:36:09.681192 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 14 22:36:09.681206 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 14 22:36:09.681218 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 14 22:36:09.681230 systemd[1]: Stopped systemd-fsck-usr.service. Jul 14 22:36:09.681244 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 14 22:36:09.681259 kernel: fuse: init (API version 7.39) Jul 14 22:36:09.681271 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 14 22:36:09.681282 kernel: loop: module loaded Jul 14 22:36:09.681294 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 14 22:36:09.681307 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 14 22:36:09.681319 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 14 22:36:09.681332 systemd[1]: verity-setup.service: Deactivated successfully. Jul 14 22:36:09.681343 systemd[1]: Stopped verity-setup.service. 
Jul 14 22:36:09.681356 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:36:09.681372 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 14 22:36:09.681384 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 14 22:36:09.681396 systemd[1]: Mounted media.mount - External Media Directory. Jul 14 22:36:09.681408 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 14 22:36:09.681420 kernel: ACPI: bus type drm_connector registered Jul 14 22:36:09.681434 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 14 22:36:09.681463 systemd-journald[1136]: Collecting audit messages is disabled. Jul 14 22:36:09.681485 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 14 22:36:09.681497 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 14 22:36:09.681509 systemd-journald[1136]: Journal started Jul 14 22:36:09.681542 systemd-journald[1136]: Runtime Journal (/run/log/journal/e16f87b9d5434c5caafb6291774fe8e3) is 6.0M, max 48.3M, 42.2M free. Jul 14 22:36:09.421855 systemd[1]: Queued start job for default target multi-user.target. Jul 14 22:36:09.444259 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 14 22:36:09.444900 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 14 22:36:09.684572 systemd[1]: Started systemd-journald.service - Journal Service. Jul 14 22:36:09.685773 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 14 22:36:09.687319 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 14 22:36:09.687517 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 14 22:36:09.689005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:36:09.689178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:36:09.690764 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:36:09.690953 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:36:09.692389 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:36:09.692636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:36:09.694226 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 14 22:36:09.694432 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 14 22:36:09.695902 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:36:09.696114 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:36:09.697626 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 14 22:36:09.699106 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 14 22:36:09.700623 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 14 22:36:09.715075 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 14 22:36:09.725660 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 14 22:36:09.728207 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 14 22:36:09.729475 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 14 22:36:09.729508 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 14 22:36:09.731927 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 14 22:36:09.734567 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 14 22:36:09.737036 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 14 22:36:09.738197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:36:09.742880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 14 22:36:09.746429 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 14 22:36:09.748768 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:36:09.751567 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 14 22:36:09.752974 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:36:09.760755 systemd-journald[1136]: Time spent on flushing to /var/log/journal/e16f87b9d5434c5caafb6291774fe8e3 is 14.798ms for 991 entries. Jul 14 22:36:09.760755 systemd-journald[1136]: System Journal (/var/log/journal/e16f87b9d5434c5caafb6291774fe8e3) is 8.0M, max 195.6M, 187.6M free. Jul 14 22:36:09.792489 systemd-journald[1136]: Received client request to flush runtime journal. Jul 14 22:36:09.792522 kernel: loop0: detected capacity change from 0 to 229808 Jul 14 22:36:09.757683 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 14 22:36:09.761814 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 14 22:36:09.766767 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 14 22:36:09.770424 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 14 22:36:09.771809 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 14 22:36:09.773748 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 14 22:36:09.775700 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 14 22:36:09.783071 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 14 22:36:09.798728 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 14 22:36:09.800635 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 14 22:36:09.812150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 14 22:36:09.819171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 14 22:36:09.820673 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 14 22:36:09.825290 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 14 22:36:09.825309 systemd-tmpfiles[1177]: ACLs are not supported, ignoring. Jul 14 22:36:09.830165 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jul 14 22:36:09.832496 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 14 22:36:09.836829 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 14 22:36:09.838673 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 14 22:36:09.839277 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 14 22:36:09.846439 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 14 22:36:09.852574 kernel: loop1: detected capacity change from 0 to 142488 Jul 14 22:36:09.869765 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 14 22:36:09.876810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 14 22:36:09.884583 kernel: loop2: detected capacity change from 0 to 140768 Jul 14 22:36:09.896517 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jul 14 22:36:09.896563 systemd-tmpfiles[1199]: ACLs are not supported, ignoring. Jul 14 22:36:09.904690 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 14 22:36:09.931762 kernel: loop3: detected capacity change from 0 to 229808 Jul 14 22:36:09.942622 kernel: loop4: detected capacity change from 0 to 142488 Jul 14 22:36:09.956065 kernel: loop5: detected capacity change from 0 to 140768 Jul 14 22:36:09.967085 (sd-merge)[1203]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 14 22:36:09.967883 (sd-merge)[1203]: Merged extensions into '/usr'. Jul 14 22:36:09.971981 systemd[1]: Reloading requested from client PID 1176 ('systemd-sysext') (unit systemd-sysext.service)... Jul 14 22:36:09.972002 systemd[1]: Reloading... Jul 14 22:36:10.044607 zram_generator::config[1232]: No configuration found. Jul 14 22:36:10.117436 ldconfig[1171]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 14 22:36:10.179477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:36:10.235713 systemd[1]: Reloading finished in 263 ms. Jul 14 22:36:10.278845 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 14 22:36:10.280712 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 14 22:36:10.299812 systemd[1]: Starting ensure-sysext.service... Jul 14 22:36:10.302289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 14 22:36:10.309981 systemd[1]: Reloading requested from client PID 1266 ('systemctl') (unit ensure-sysext.service)... Jul 14 22:36:10.309998 systemd[1]: Reloading... Jul 14 22:36:10.356971 zram_generator::config[1294]: No configuration found. Jul 14 22:36:10.358013 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 14 22:36:10.358387 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 14 22:36:10.359477 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 14 22:36:10.359814 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. 
Jul 14 22:36:10.359896 systemd-tmpfiles[1267]: ACLs are not supported, ignoring. Jul 14 22:36:10.363805 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:36:10.363899 systemd-tmpfiles[1267]: Skipping /boot Jul 14 22:36:10.375636 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot. Jul 14 22:36:10.375650 systemd-tmpfiles[1267]: Skipping /boot Jul 14 22:36:10.470502 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:36:10.521527 systemd[1]: Reloading finished in 211 ms. Jul 14 22:36:10.539338 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 14 22:36:10.552441 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 14 22:36:10.560075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:36:10.562713 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 14 22:36:10.565460 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 14 22:36:10.570724 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 14 22:36:10.575191 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 14 22:36:10.583748 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 14 22:36:10.588890 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:36:10.589075 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:36:10.590839 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:36:10.593907 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:36:10.596279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:36:10.597632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:36:10.603674 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 14 22:36:10.604861 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:36:10.605901 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:36:10.606291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:36:10.612514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:36:10.612825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:36:10.615090 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:36:10.615455 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:36:10.617452 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 14 22:36:10.618284 systemd-udevd[1338]: Using default interface naming scheme 'v255'. Jul 14 22:36:10.624025 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
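
The (sd-merge) lines above are systemd-sysext at work: it collects extension images from the standard extension directories (the files stage earlier planted /etc/extensions/kubernetes.raw as a symlink), overlays them onto /usr and /opt, and the service manager is then reloaded so the merged units become visible. A hedged sketch of just the discovery half, assuming the usual search directories (systemd-sysext also accepts plain directory trees, which this sketch omits):

    from pathlib import Path

    # Directories systemd-sysext scans for extension images; /etc/extensions is
    # where this boot's Ignition config linked kubernetes.raw.
    SYSEXT_DIRS = [Path(p) for p in ("/etc/extensions", "/run/extensions", "/var/lib/extensions")]

    def discover_extensions() -> list[Path]:
        images: list[Path] = []
        for d in SYSEXT_DIRS:
            if d.is_dir():
                images.extend(sorted(d.glob("*.raw")))  # raw disk images only
        return images

    if __name__ == "__main__":
        for img in discover_extensions():
            # resolve() follows symlinks such as kubernetes.raw -> /opt/extensions/...
            print("found extension image:", img.resolve())
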
Jul 14 22:36:10.629663 augenrules[1361]: No rules Jul 14 22:36:10.631063 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:36:10.637924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:36:10.638510 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 14 22:36:10.643941 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 14 22:36:10.647517 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 14 22:36:10.651892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 14 22:36:10.656939 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 14 22:36:10.658179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 14 22:36:10.660988 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 14 22:36:10.662119 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 14 22:36:10.663225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 14 22:36:10.665119 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 14 22:36:10.667150 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 14 22:36:10.668664 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 14 22:36:10.675027 systemd[1]: Finished ensure-sysext.service. Jul 14 22:36:10.679312 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 14 22:36:10.680022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 14 22:36:10.684465 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 14 22:36:10.684720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 14 22:36:10.686298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 14 22:36:10.686546 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 14 22:36:10.695468 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 14 22:36:10.711790 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 14 22:36:10.713243 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 14 22:36:10.713350 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 14 22:36:10.718730 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 14 22:36:10.720199 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 14 22:36:10.721727 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 14 22:36:10.734583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1391) Jul 14 22:36:10.758104 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Jul 14 22:36:10.768464 systemd-resolved[1336]: Positive Trust Anchors: Jul 14 22:36:10.768489 systemd-resolved[1336]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 14 22:36:10.768530 systemd-resolved[1336]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 14 22:36:10.772583 systemd-resolved[1336]: Defaulting to hostname 'linux'. Jul 14 22:36:10.773974 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 14 22:36:10.776045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 14 22:36:10.777673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 14 22:36:10.786614 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 14 22:36:10.787964 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 14 22:36:10.791626 kernel: ACPI: button: Power Button [PWRF] Jul 14 22:36:10.824128 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 14 22:36:10.824436 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 14 22:36:10.824688 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 14 22:36:10.824893 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 14 22:36:10.825855 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 14 22:36:10.827877 systemd-networkd[1403]: lo: Link UP Jul 14 22:36:10.828162 systemd-networkd[1403]: lo: Gained carrier Jul 14 22:36:10.830652 systemd-networkd[1403]: Enumeration completed Jul 14 22:36:10.831161 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:36:10.831171 systemd-networkd[1403]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 14 22:36:10.831687 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 14 22:36:10.833417 systemd[1]: Reached target network.target - Network. Jul 14 22:36:10.836849 systemd-networkd[1403]: eth0: Link UP Jul 14 22:36:10.836858 systemd-networkd[1403]: eth0: Gained carrier Jul 14 22:36:10.836872 systemd-networkd[1403]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 14 22:36:10.842022 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 14 22:36:10.848618 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 14 22:36:10.850379 systemd[1]: Reached target time-set.target - System Time Set. Jul 14 22:36:10.853650 systemd-networkd[1403]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 14 22:36:10.854469 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Jul 14 22:36:11.282093 systemd-resolved[1336]: Clock change detected. Flushing caches. 
Jul 14 22:36:11.282244 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 14 22:36:11.282289 systemd-timesyncd[1404]: Initial clock synchronization to Mon 2025-07-14 22:36:11.282045 UTC. Jul 14 22:36:11.288010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:36:11.289283 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 14 22:36:11.301472 kernel: mousedev: PS/2 mouse device common for all mice Jul 14 22:36:11.303605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 14 22:36:11.303902 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:36:11.324555 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 14 22:36:11.396500 kernel: kvm_amd: TSC scaling supported Jul 14 22:36:11.396567 kernel: kvm_amd: Nested Virtualization enabled Jul 14 22:36:11.396581 kernel: kvm_amd: Nested Paging enabled Jul 14 22:36:11.397661 kernel: kvm_amd: LBR virtualization supported Jul 14 22:36:11.397694 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 14 22:36:11.398797 kernel: kvm_amd: Virtual GIF supported Jul 14 22:36:11.420539 kernel: EDAC MC: Ver: 3.0.0 Jul 14 22:36:11.434638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 14 22:36:11.456467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 14 22:36:11.471866 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 14 22:36:11.482097 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:36:11.514816 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 14 22:36:11.516477 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 14 22:36:11.517597 systemd[1]: Reached target sysinit.target - System Initialization. Jul 14 22:36:11.518805 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 14 22:36:11.520105 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 14 22:36:11.521585 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 14 22:36:11.523008 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 14 22:36:11.524252 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 14 22:36:11.525494 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 14 22:36:11.525520 systemd[1]: Reached target paths.target - Path Units. Jul 14 22:36:11.526430 systemd[1]: Reached target timers.target - Timer Units. Jul 14 22:36:11.528489 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 14 22:36:11.531249 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 14 22:36:11.549563 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 14 22:36:11.552663 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 14 22:36:11.554586 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 14 22:36:11.555991 systemd[1]: Reached target sockets.target - Socket Units. Jul 14 22:36:11.557122 systemd[1]: Reached target basic.target - Basic System. 
Jul 14 22:36:11.558267 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:36:11.558305 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 14 22:36:11.559714 systemd[1]: Starting containerd.service - containerd container runtime... Jul 14 22:36:11.562513 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 14 22:36:11.564580 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 14 22:36:11.566743 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 14 22:36:11.570969 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 14 22:36:11.573517 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 14 22:36:11.576653 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 14 22:36:11.579391 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 14 22:36:11.580882 jq[1442]: false Jul 14 22:36:11.582834 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 14 22:36:11.587683 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 14 22:36:11.592606 extend-filesystems[1443]: Found loop3 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found loop4 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found loop5 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found sr0 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda1 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda2 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda3 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found usr Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda4 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda6 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda7 Jul 14 22:36:11.593706 extend-filesystems[1443]: Found vda9 Jul 14 22:36:11.593706 extend-filesystems[1443]: Checking size of /dev/vda9 Jul 14 22:36:11.593616 dbus-daemon[1441]: [system] SELinux support is enabled Jul 14 22:36:11.606617 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 14 22:36:11.609274 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 14 22:36:11.609901 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:36:11.615685 extend-filesystems[1443]: Resized partition /dev/vda9 Jul 14 22:36:11.616914 systemd[1]: Starting update-engine.service - Update Engine... Jul 14 22:36:11.619759 extend-filesystems[1463]: resize2fs 1.47.1 (20-May-2024) Jul 14 22:36:11.621233 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 14 22:36:11.623767 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 14 22:36:11.625513 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:36:11.631490 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1376) Jul 14 22:36:11.632946 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
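
The resize2fs run started above grows the root ext4 filesystem on /dev/vda9 on-line, from 553472 to 1864699 blocks at 4 KiB each, which is roughly 2.1 GiB expanding to 7.1 GiB; the "resized filesystem" kernel line further down confirms completion. A quick arithmetic check of those figures:

    # Block counts reported for /dev/vda9 in this log, at 4 KiB per block.
    BLOCK = 4096
    old_blocks, new_blocks = 553_472, 1_864_699

    for label, blocks in (("before", old_blocks), ("after", new_blocks)):
        size = blocks * BLOCK
        print(f"{label}: {blocks} blocks = {size} bytes = {size / 2**30:.2f} GiB")
    # before: 553472 blocks = 2267021312 bytes = 2.11 GiB
    # after:  1864699 blocks = 7637807104 bytes = 7.11 GiB
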
Jul 14 22:36:11.636074 jq[1464]: true Jul 14 22:36:11.635892 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 14 22:36:11.637646 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 14 22:36:11.638032 systemd[1]: motdgen.service: Deactivated successfully. Jul 14 22:36:11.638228 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 14 22:36:11.640268 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 14 22:36:11.640516 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 14 22:36:11.654724 jq[1468]: true Jul 14 22:36:11.659727 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 14 22:36:11.802292 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:36:11.802666 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:36:11.804627 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:36:11.804648 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:36:11.811687 update_engine[1462]: I20250714 22:36:11.811430 1462 main.cc:92] Flatcar Update Engine starting Jul 14 22:36:11.812773 systemd[1]: Started update-engine.service - Update Engine. Jul 14 22:36:11.812852 update_engine[1462]: I20250714 22:36:11.812798 1462 update_check_scheduler.cc:74] Next update check in 10m23s Jul 14 22:36:11.825612 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:36:11.947390 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:36:11.980315 locksmithd[1494]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:36:11.983283 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:36:11.986179 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button) Jul 14 22:36:11.986200 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 14 22:36:11.988317 systemd-logind[1452]: New seat seat0. Jul 14 22:36:11.990735 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:36:11.992102 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:36:11.996698 tar[1467]: linux-amd64/LICENSE Jul 14 22:36:11.997803 tar[1467]: linux-amd64/helm Jul 14 22:36:12.000944 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:36:12.001231 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:36:12.019888 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:36:12.029473 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:36:12.040030 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:36:12.049757 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:36:12.052972 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:36:12.055130 systemd[1]: Reached target getty.target - Login Prompts. 
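The EXT4-fs line above starts an on-line grow of /dev/vda9 from 553472 to 1864699 blocks; at the 4 KiB block size resize2fs reports further down, that is roughly 2.1 GiB growing to 7.1 GiB. The same arithmetic as a one-line sketch:

    # 4096-byte blocks, sizes in GiB
    awk 'BEGIN { printf "before: %.1f GiB  after: %.1f GiB\n", 553472*4096/2^30, 1864699*4096/2^30 }'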
Jul 14 22:36:12.078379 extend-filesystems[1463]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:36:12.078379 extend-filesystems[1463]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:36:12.078379 extend-filesystems[1463]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:36:12.082638 extend-filesystems[1443]: Resized filesystem in /dev/vda9 Jul 14 22:36:12.084476 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:36:12.084720 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:36:12.095474 bash[1493]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:36:12.098304 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:36:12.100910 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:36:12.201569 containerd[1471]: time="2025-07-14T22:36:12.200860025Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:36:12.224666 containerd[1471]: time="2025-07-14T22:36:12.224553001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.226829 containerd[1471]: time="2025-07-14T22:36:12.226772712Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:36:12.226829 containerd[1471]: time="2025-07-14T22:36:12.226815282Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:36:12.226888 containerd[1471]: time="2025-07-14T22:36:12.226838365Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:36:12.227092 containerd[1471]: time="2025-07-14T22:36:12.227051184Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:36:12.227092 containerd[1471]: time="2025-07-14T22:36:12.227085028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227243 containerd[1471]: time="2025-07-14T22:36:12.227198100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227243 containerd[1471]: time="2025-07-14T22:36:12.227220923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227548 containerd[1471]: time="2025-07-14T22:36:12.227501469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227548 containerd[1471]: time="2025-07-14T22:36:12.227530373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227594 containerd[1471]: time="2025-07-14T22:36:12.227548657Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227594 containerd[1471]: time="2025-07-14T22:36:12.227561040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227692 containerd[1471]: time="2025-07-14T22:36:12.227666869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.227989 containerd[1471]: time="2025-07-14T22:36:12.227949999Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:36:12.228162 containerd[1471]: time="2025-07-14T22:36:12.228127903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:36:12.228162 containerd[1471]: time="2025-07-14T22:36:12.228149463Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:36:12.228336 containerd[1471]: time="2025-07-14T22:36:12.228300757Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 14 22:36:12.228413 containerd[1471]: time="2025-07-14T22:36:12.228388672Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:36:12.237639 containerd[1471]: time="2025-07-14T22:36:12.237575474Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:36:12.237685 containerd[1471]: time="2025-07-14T22:36:12.237655925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:36:12.237706 containerd[1471]: time="2025-07-14T22:36:12.237681072Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:36:12.237706 containerd[1471]: time="2025-07-14T22:36:12.237702221Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:36:12.237763 containerd[1471]: time="2025-07-14T22:36:12.237724373Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:36:12.237966 containerd[1471]: time="2025-07-14T22:36:12.237909660Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:36:12.238238 containerd[1471]: time="2025-07-14T22:36:12.238193061Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:36:12.238383 containerd[1471]: time="2025-07-14T22:36:12.238341069Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:36:12.238383 containerd[1471]: time="2025-07-14T22:36:12.238375153Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:36:12.238470 containerd[1471]: time="2025-07-14T22:36:12.238389540Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:36:12.238470 containerd[1471]: time="2025-07-14T22:36:12.238404488Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jul 14 22:36:12.238470 containerd[1471]: time="2025-07-14T22:36:12.238427461Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238470 containerd[1471]: time="2025-07-14T22:36:12.238443711Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238477344Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238497141Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238514694Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238533239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238550602Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238578845Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238604 containerd[1471]: time="2025-07-14T22:36:12.238599063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238616776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238634719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238651210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238669264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238684753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238701535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238718076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238734597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238745938Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238756999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238776665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.238803 containerd[1471]: time="2025-07-14T22:36:12.238800580Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238828433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238846867Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238861805Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238935704Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238959879Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238973575Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.238988613Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.239000896Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.239016285Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.239028458Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:36:12.239142 containerd[1471]: time="2025-07-14T22:36:12.239041121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:36:12.239491 containerd[1471]: time="2025-07-14T22:36:12.239355270Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:36:12.239491 containerd[1471]: time="2025-07-14T22:36:12.239440630Z" level=info msg="Connect containerd service" Jul 14 22:36:12.239744 containerd[1471]: time="2025-07-14T22:36:12.239516773Z" level=info msg="using legacy CRI server" Jul 14 22:36:12.239744 containerd[1471]: time="2025-07-14T22:36:12.239527563Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:36:12.239744 containerd[1471]: time="2025-07-14T22:36:12.239622441Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:36:12.240406 containerd[1471]: time="2025-07-14T22:36:12.240342681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 22:36:12.240785 
containerd[1471]: time="2025-07-14T22:36:12.240689351Z" level=info msg="Start subscribing containerd event" Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.240894235Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.240959387Z" level=info msg="Start recovering state" Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.240999963Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.241081496Z" level=info msg="Start event monitor" Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.241122713Z" level=info msg="Start snapshots syncer" Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.241139445Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.241154553Z" level=info msg="Start streaming server" Jul 14 22:36:12.241533 containerd[1471]: time="2025-07-14T22:36:12.241293784Z" level=info msg="containerd successfully booted in 0.043800s" Jul 14 22:36:12.241484 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:36:12.441104 tar[1467]: linux-amd64/README.md Jul 14 22:36:12.454196 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:36:13.097654 systemd-networkd[1403]: eth0: Gained IPv6LL Jul 14 22:36:13.101810 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 14 22:36:13.104036 systemd[1]: Reached target network-online.target - Network is Online. Jul 14 22:36:13.121970 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 14 22:36:13.126757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:36:13.129346 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 14 22:36:13.151315 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 14 22:36:13.151743 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 14 22:36:13.153583 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:36:13.156827 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 14 22:36:13.875263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:36:13.893831 (kubelet)[1553]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:36:13.894489 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:36:13.895767 systemd[1]: Startup finished in 1.020s (kernel) + 17.152s (initrd) + 4.632s (userspace) = 22.806s. Jul 14 22:36:14.326122 kubelet[1553]: E0714 22:36:14.325978 1553 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:36:14.330398 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:36:14.330651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:36:14.331063 systemd[1]: kubelet.service: Consumed 1.068s CPU time. 
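The kubelet exit above is the expected first-boot state on a node that has not been initialized yet: /var/lib/kubelet/config.yaml is normally generated by kubeadm init or kubeadm join, and the unit keeps failing and restarting until that happens. Purely to illustrate the file's shape (a minimal sketch, not this node's generated config):

    # sketch only; kubeadm writes the real file during init/join
    sudo tee /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches SystemdCgroup:true in the containerd CRI config above
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF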
Jul 14 22:36:15.579353 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:36:15.580802 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:38240.service - OpenSSH per-connection server daemon (10.0.0.1:38240). Jul 14 22:36:15.634407 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 38240 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:15.636769 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:15.646623 systemd-logind[1452]: New session 1 of user core. Jul 14 22:36:15.647942 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:36:15.654739 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:36:15.667650 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:36:15.670950 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:36:15.683033 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:36:15.822666 systemd[1571]: Queued start job for default target default.target. Jul 14 22:36:15.834038 systemd[1571]: Created slice app.slice - User Application Slice. Jul 14 22:36:15.834068 systemd[1571]: Reached target paths.target - Paths. Jul 14 22:36:15.834083 systemd[1571]: Reached target timers.target - Timers. Jul 14 22:36:15.835960 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:36:15.849239 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:36:15.849421 systemd[1571]: Reached target sockets.target - Sockets. Jul 14 22:36:15.849520 systemd[1571]: Reached target basic.target - Basic System. Jul 14 22:36:15.849586 systemd[1571]: Reached target default.target - Main User Target. Jul 14 22:36:15.849655 systemd[1571]: Startup finished in 157ms. Jul 14 22:36:15.849836 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:36:15.851668 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:36:15.912888 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:38254.service - OpenSSH per-connection server daemon (10.0.0.1:38254). Jul 14 22:36:15.957478 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 38254 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:15.959545 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:15.964214 systemd-logind[1452]: New session 2 of user core. Jul 14 22:36:15.974659 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:36:16.031396 sshd[1582]: pam_unix(sshd:session): session closed for user core Jul 14 22:36:16.046738 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:38254.service: Deactivated successfully. Jul 14 22:36:16.048658 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:36:16.050312 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:36:16.051667 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:38256.service - OpenSSH per-connection server daemon (10.0.0.1:38256). Jul 14 22:36:16.052519 systemd-logind[1452]: Removed session 2. 
Jul 14 22:36:16.092291 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 38256 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:16.094353 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:16.098815 systemd-logind[1452]: New session 3 of user core. Jul 14 22:36:16.113592 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:36:16.164599 sshd[1589]: pam_unix(sshd:session): session closed for user core Jul 14 22:36:16.175852 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:38256.service: Deactivated successfully. Jul 14 22:36:16.178824 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:36:16.180436 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:36:16.188847 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:38266.service - OpenSSH per-connection server daemon (10.0.0.1:38266). Jul 14 22:36:16.190052 systemd-logind[1452]: Removed session 3. Jul 14 22:36:16.223949 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 38266 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:16.225944 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:16.230773 systemd-logind[1452]: New session 4 of user core. Jul 14 22:36:16.248850 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 14 22:36:16.304949 sshd[1596]: pam_unix(sshd:session): session closed for user core Jul 14 22:36:16.326612 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:38266.service: Deactivated successfully. Jul 14 22:36:16.328582 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:36:16.330348 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:36:16.344858 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:38274.service - OpenSSH per-connection server daemon (10.0.0.1:38274). Jul 14 22:36:16.346147 systemd-logind[1452]: Removed session 4. Jul 14 22:36:16.379053 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 38274 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:16.380765 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:16.385409 systemd-logind[1452]: New session 5 of user core. Jul 14 22:36:16.398754 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:36:16.460829 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:36:16.461329 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:36:16.487475 sudo[1606]: pam_unix(sudo:session): session closed for user root Jul 14 22:36:16.489933 sshd[1603]: pam_unix(sshd:session): session closed for user core Jul 14 22:36:16.503679 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:38274.service: Deactivated successfully. Jul 14 22:36:16.505523 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:36:16.507214 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:36:16.519740 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:38280.service - OpenSSH per-connection server daemon (10.0.0.1:38280). Jul 14 22:36:16.520766 systemd-logind[1452]: Removed session 5. 
Jul 14 22:36:16.556883 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 38280 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:16.558629 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:16.562673 systemd-logind[1452]: New session 6 of user core. Jul 14 22:36:16.578589 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:36:16.634482 sudo[1615]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:36:16.634842 sudo[1615]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:36:16.640044 sudo[1615]: pam_unix(sudo:session): session closed for user root Jul 14 22:36:16.646879 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:36:16.647223 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:36:16.665680 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:36:16.667582 auditctl[1618]: No rules Jul 14 22:36:16.668875 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:36:16.669145 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:36:16.670974 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:36:16.707049 augenrules[1636]: No rules Jul 14 22:36:16.709284 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 14 22:36:16.710751 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 14 22:36:16.712792 sshd[1611]: pam_unix(sshd:session): session closed for user core Jul 14 22:36:16.729172 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:38280.service: Deactivated successfully. Jul 14 22:36:16.731262 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:36:16.732995 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:36:16.746818 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:38284.service - OpenSSH per-connection server daemon (10.0.0.1:38284). Jul 14 22:36:16.747934 systemd-logind[1452]: Removed session 6. Jul 14 22:36:16.782548 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 38284 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:36:16.784251 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:36:16.789123 systemd-logind[1452]: New session 7 of user core. Jul 14 22:36:16.800609 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:36:16.856361 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:36:16.856841 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:36:17.272695 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 22:36:17.272833 (dockerd)[1666]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:36:17.555889 dockerd[1666]: time="2025-07-14T22:36:17.555728982Z" level=info msg="Starting up" Jul 14 22:36:19.161351 dockerd[1666]: time="2025-07-14T22:36:19.161287872Z" level=info msg="Loading containers: start." 
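For context on the auditctl/augenrules exchange above: audit-rules.service rebuilds the kernel rule set from /etc/audit/rules.d/*.rules, so removing 80-selinux.rules and 99-default.rules and restarting the unit legitimately yields "No rules" twice. Roughly the same steps by hand, assuming the stock auditd userspace tools:

    sudo augenrules --load   # assemble /etc/audit/rules.d/*.rules and load them
    sudo auditctl -l         # list active rules; prints "No rules" after the deletions above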
Jul 14 22:36:19.289479 kernel: Initializing XFRM netlink socket Jul 14 22:36:19.371693 systemd-networkd[1403]: docker0: Link UP Jul 14 22:36:19.604393 dockerd[1666]: time="2025-07-14T22:36:19.604255217Z" level=info msg="Loading containers: done." Jul 14 22:36:19.619648 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck403711423-merged.mount: Deactivated successfully. Jul 14 22:36:19.654887 dockerd[1666]: time="2025-07-14T22:36:19.654805150Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:36:19.655029 dockerd[1666]: time="2025-07-14T22:36:19.654934172Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:36:19.655100 dockerd[1666]: time="2025-07-14T22:36:19.655067502Z" level=info msg="Daemon has completed initialization" Jul 14 22:36:19.711773 dockerd[1666]: time="2025-07-14T22:36:19.711701541Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:36:19.712570 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:36:24.580922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:36:24.590625 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:36:24.935198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:36:24.939960 (kubelet)[1821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:36:25.077563 kubelet[1821]: E0714 22:36:25.077504 1821 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:36:25.084223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:36:25.084461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:36:25.264716 containerd[1471]: time="2025-07-14T22:36:25.264583351Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 14 22:36:32.815230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount59333552.mount: Deactivated successfully. 
Jul 14 22:36:33.900628 containerd[1471]: time="2025-07-14T22:36:33.900573087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:33.902035 containerd[1471]: time="2025-07-14T22:36:33.901977760Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 14 22:36:33.903606 containerd[1471]: time="2025-07-14T22:36:33.903564245Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:33.906501 containerd[1471]: time="2025-07-14T22:36:33.906461426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:33.907481 containerd[1471]: time="2025-07-14T22:36:33.907432727Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 8.642812136s" Jul 14 22:36:33.907540 containerd[1471]: time="2025-07-14T22:36:33.907483171Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 14 22:36:33.908101 containerd[1471]: time="2025-07-14T22:36:33.908076113Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 14 22:36:35.334789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:36:35.347683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:36:35.524375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:36:35.530497 (kubelet)[1894]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:36:35.573197 kubelet[1894]: E0714 22:36:35.573112 1894 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:36:35.578981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:36:35.579226 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
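The pull above moved 30075899 bytes in 8.642812136s, i.e. about 3.5 MB/s from registry.k8s.io; the division as a quick check:

    awk 'BEGIN { printf "%.2f MB/s\n", 30075899 / 8.642812136 / 1e6 }'   # ~3.48 MB/s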
Jul 14 22:36:44.221629 containerd[1471]: time="2025-07-14T22:36:44.221575246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:44.333819 containerd[1471]: time="2025-07-14T22:36:44.333762660Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 14 22:36:44.360231 containerd[1471]: time="2025-07-14T22:36:44.360163135Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:44.420490 containerd[1471]: time="2025-07-14T22:36:44.420405378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:44.421689 containerd[1471]: time="2025-07-14T22:36:44.421656055Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 10.513548763s" Jul 14 22:36:44.421753 containerd[1471]: time="2025-07-14T22:36:44.421694600Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 14 22:36:44.422520 containerd[1471]: time="2025-07-14T22:36:44.422496319Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 14 22:36:45.764725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:36:45.773620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:36:45.939148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:36:45.943761 (kubelet)[1915]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:36:46.076025 kubelet[1915]: E0714 22:36:46.075860 1915 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:36:46.080234 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:36:46.080471 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:36:51.013899 containerd[1471]: time="2025-07-14T22:36:51.013821206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:51.057074 containerd[1471]: time="2025-07-14T22:36:51.056991798Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 14 22:36:51.343563 containerd[1471]: time="2025-07-14T22:36:51.343475316Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:51.464289 containerd[1471]: time="2025-07-14T22:36:51.464206372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:51.465690 containerd[1471]: time="2025-07-14T22:36:51.465632768Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 7.043105892s" Jul 14 22:36:51.465690 containerd[1471]: time="2025-07-14T22:36:51.465684747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 14 22:36:51.466169 containerd[1471]: time="2025-07-14T22:36:51.466135810Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 14 22:36:56.189337 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2228992750.mount: Deactivated successfully. Jul 14 22:36:56.190324 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:36:56.201689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:36:56.363125 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:36:56.367697 (kubelet)[1939]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:36:56.401124 kubelet[1939]: E0714 22:36:56.401069 1939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:36:56.405979 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:36:56.406241 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:36:57.128436 update_engine[1462]: I20250714 22:36:57.128316 1462 update_attempter.cc:509] Updating boot flags... 
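"Updating boot flags..." is Flatcar's update_engine marking the currently booted USR partition as good via the GPT priority/success attributes. Assuming the stock Flatcar client tools are present, the updater and the locksmithd reboot strategy seen earlier can be inspected with:

    update_engine_client -status   # CURRENT_OP, NEW_VERSION, etc.
    locksmithctl status            # reboot-coordination strategy ("reboot" per locksmithd above)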
Jul 14 22:36:57.324698 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1955) Jul 14 22:36:57.431687 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1955) Jul 14 22:36:57.720312 containerd[1471]: time="2025-07-14T22:36:57.720235626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:57.723405 containerd[1471]: time="2025-07-14T22:36:57.723341114Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 14 22:36:57.725273 containerd[1471]: time="2025-07-14T22:36:57.725245572Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:57.727470 containerd[1471]: time="2025-07-14T22:36:57.727430312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:36:57.728093 containerd[1471]: time="2025-07-14T22:36:57.728051051Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 6.261887279s" Jul 14 22:36:57.728093 containerd[1471]: time="2025-07-14T22:36:57.728078002Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 14 22:36:57.728483 containerd[1471]: time="2025-07-14T22:36:57.728433277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 14 22:37:00.523812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145732049.mount: Deactivated successfully. Jul 14 22:37:06.514598 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 14 22:37:06.524878 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:06.693961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:06.698531 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:37:06.919167 kubelet[1990]: E0714 22:37:06.919000 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:37:06.923855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:37:06.924136 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 14 22:37:10.346104 containerd[1471]: time="2025-07-14T22:37:10.346033811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:10.395009 containerd[1471]: time="2025-07-14T22:37:10.394925902Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 14 22:37:10.442072 containerd[1471]: time="2025-07-14T22:37:10.442005076Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:10.484735 containerd[1471]: time="2025-07-14T22:37:10.484707237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:10.485688 containerd[1471]: time="2025-07-14T22:37:10.485666385Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 12.757183214s" Jul 14 22:37:10.485738 containerd[1471]: time="2025-07-14T22:37:10.485696221Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 14 22:37:10.486553 containerd[1471]: time="2025-07-14T22:37:10.486528430Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:37:11.057703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4261892117.mount: Deactivated successfully. 
Jul 14 22:37:11.075802 containerd[1471]: time="2025-07-14T22:37:11.075744544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:11.078980 containerd[1471]: time="2025-07-14T22:37:11.078951028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:37:11.080348 containerd[1471]: time="2025-07-14T22:37:11.080317604Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:11.083811 containerd[1471]: time="2025-07-14T22:37:11.083776022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:11.084925 containerd[1471]: time="2025-07-14T22:37:11.084873240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 598.311857ms" Jul 14 22:37:11.085010 containerd[1471]: time="2025-07-14T22:37:11.084927983Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:37:11.085576 containerd[1471]: time="2025-07-14T22:37:11.085547791Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 14 22:37:11.597791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519332243.mount: Deactivated successfully. Jul 14 22:37:14.410547 containerd[1471]: time="2025-07-14T22:37:14.410485851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:14.411614 containerd[1471]: time="2025-07-14T22:37:14.411562037Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 14 22:37:14.412907 containerd[1471]: time="2025-07-14T22:37:14.412866503Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:14.416185 containerd[1471]: time="2025-07-14T22:37:14.416142682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:14.417392 containerd[1471]: time="2025-07-14T22:37:14.417355626Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.331775163s" Jul 14 22:37:14.417392 containerd[1471]: time="2025-07-14T22:37:14.417389830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 14 22:37:17.014651 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
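Six scheduled restarts in, kubelet is still failing on the same missing config.yaml, and systemd keeps relaunching it according to the unit's Restart= settings. The loop can be confirmed without reading the full journal (property names per systemctl show; NRestarts needs a reasonably recent systemd):

    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    journalctl -u kubelet -n 20 --no-pager   # shows the repeating config.yaml error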
Jul 14 22:37:17.023605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:17.183332 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:17.193720 (kubelet)[2126]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:37:17.230249 kubelet[2126]: E0714 22:37:17.230179 2126 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:37:17.235118 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:37:17.235344 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:37:18.658059 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:18.668689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:18.691931 systemd[1]: Reloading requested from client PID 2142 ('systemctl') (unit session-7.scope)... Jul 14 22:37:18.691947 systemd[1]: Reloading... Jul 14 22:37:18.770475 zram_generator::config[2184]: No configuration found. Jul 14 22:37:20.518991 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:37:20.596861 systemd[1]: Reloading finished in 1904 ms. Jul 14 22:37:20.647157 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:20.650290 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:37:20.650615 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:20.652524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:20.830235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:20.834881 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:37:20.869792 kubelet[2231]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:37:20.869792 kubelet[2231]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:37:20.869792 kubelet[2231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 14 22:37:20.870299 kubelet[2231]: I0714 22:37:20.869812 2231 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 14 22:37:21.163418 kubelet[2231]: I0714 22:37:21.163295 2231 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 14 22:37:21.163418 kubelet[2231]: I0714 22:37:21.163334 2231 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 14 22:37:21.163587 kubelet[2231]: I0714 22:37:21.163568 2231 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 14 22:37:21.184989 kubelet[2231]: E0714 22:37:21.184944 2231 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 14 22:37:21.185588 kubelet[2231]: I0714 22:37:21.185558 2231 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 14 22:37:21.191943 kubelet[2231]: E0714 22:37:21.191852 2231 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 14 22:37:21.191943 kubelet[2231]: I0714 22:37:21.191901 2231 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 14 22:37:21.197515 kubelet[2231]: I0714 22:37:21.197493 2231 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 14 22:37:21.197762 kubelet[2231]: I0714 22:37:21.197734 2231 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 14 22:37:21.197923 kubelet[2231]: I0714 22:37:21.197757 2231 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 14 22:37:21.197923 kubelet[2231]: I0714 22:37:21.197921 2231 topology_manager.go:138] "Creating topology manager with none policy"
Jul 14 22:37:21.198022 kubelet[2231]: I0714 22:37:21.197929 2231 container_manager_linux.go:303] "Creating device plugin manager"
Jul 14 22:37:21.198726 kubelet[2231]: I0714 22:37:21.198694 2231 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:37:21.200300 kubelet[2231]: I0714 22:37:21.200270 2231 kubelet.go:480] "Attempting to sync node with API server"
Jul 14 22:37:21.200300 kubelet[2231]: I0714 22:37:21.200288 2231 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 14 22:37:21.200358 kubelet[2231]: I0714 22:37:21.200309 2231 kubelet.go:386] "Adding apiserver pod source"
Jul 14 22:37:21.200358 kubelet[2231]: I0714 22:37:21.200324 2231 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 14 22:37:21.204990 kubelet[2231]: I0714 22:37:21.204945 2231 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 14 22:37:21.206056 kubelet[2231]: I0714 22:37:21.205365 2231 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 14 22:37:21.207827 kubelet[2231]: W0714 22:37:21.207795 2231 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 14 22:37:21.209473 kubelet[2231]: E0714 22:37:21.209024 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 14 22:37:21.209473 kubelet[2231]: E0714 22:37:21.209196 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 14 22:37:21.210730 kubelet[2231]: I0714 22:37:21.210712 2231 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 14 22:37:21.211101 kubelet[2231]: I0714 22:37:21.210756 2231 server.go:1289] "Started kubelet"
Jul 14 22:37:21.211101 kubelet[2231]: I0714 22:37:21.210958 2231 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 14 22:37:21.211942 kubelet[2231]: I0714 22:37:21.211912 2231 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 14 22:37:21.211992 kubelet[2231]: I0714 22:37:21.211968 2231 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 14 22:37:21.212334 kubelet[2231]: I0714 22:37:21.212306 2231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 14 22:37:21.212920 kubelet[2231]: I0714 22:37:21.212884 2231 server.go:317] "Adding debug handlers to kubelet server"
Jul 14 22:37:21.213950 kubelet[2231]: I0714 22:37:21.213923 2231 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 14 22:37:21.215275 kubelet[2231]: E0714 22:37:21.215007 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.215275 kubelet[2231]: I0714 22:37:21.215042 2231 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 14 22:37:21.216959 kubelet[2231]: I0714 22:37:21.216295 2231 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 14 22:37:21.216959 kubelet[2231]: I0714 22:37:21.216420 2231 reconciler.go:26] "Reconciler: start to sync state"
Jul 14 22:37:21.217772 kubelet[2231]: E0714 22:37:21.215963 2231 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523f26e0a49220 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:37:21.210733088 +0000 UTC m=+0.371583371,LastTimestamp:2025-07-14 22:37:21.210733088 +0000 UTC m=+0.371583371,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 14 22:37:21.217772 kubelet[2231]: I0714 22:37:21.217317 2231 factory.go:223] Registration of the systemd container factory successfully
Jul 14 22:37:21.217772 kubelet[2231]: I0714 22:37:21.217414 2231 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 14 22:37:21.217981 kubelet[2231]: E0714 22:37:21.216419 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms"
Jul 14 22:37:21.218694 kubelet[2231]: E0714 22:37:21.218472 2231 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 14 22:37:21.219426 kubelet[2231]: E0714 22:37:21.219177 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 14 22:37:21.220212 kubelet[2231]: I0714 22:37:21.220185 2231 factory.go:223] Registration of the containerd container factory successfully
Jul 14 22:37:21.235279 kubelet[2231]: I0714 22:37:21.235244 2231 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 14 22:37:21.235279 kubelet[2231]: I0714 22:37:21.235267 2231 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 14 22:37:21.235279 kubelet[2231]: I0714 22:37:21.235283 2231 state_mem.go:36] "Initialized new in-memory state store"
Jul 14 22:37:21.240983 kubelet[2231]: I0714 22:37:21.240757 2231 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 14 22:37:21.242522 kubelet[2231]: I0714 22:37:21.242421 2231 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 14 22:37:21.242522 kubelet[2231]: I0714 22:37:21.242472 2231 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 14 22:37:21.242522 kubelet[2231]: I0714 22:37:21.242498 2231 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
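The lease record above carries interval="200ms"; the retries that follow show 400ms, 800ms, 1.6s and 3.2s, a doubling backoff while the API server at 10.0.0.12:6443 still refuses connections. A small Go sketch of that capped exponential backoff pattern (illustrative only; the real retry logic lives in kubelet and client-go):

```go
// backoff.go: capped exponential backoff, the pattern behind the
// interval="200ms" -> "400ms" -> "800ms" -> "1.6s" -> "3.2s" retries in this log.
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff runs op until it succeeds, doubling the wait between
// attempts up to a maximum interval.
func retryWithBackoff(op func() error, initial, max time.Duration) {
	interval := initial
	for {
		if err := op(); err == nil {
			return
		} else {
			fmt.Printf("failed to ensure lease exists, will retry: %v interval=%q\n", err, interval)
		}
		time.Sleep(interval)
		if interval < max {
			interval *= 2 // 200ms, 400ms, 800ms, 1.6s, 3.2s, ...
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 6 {
			return errors.New("dial tcp 10.0.0.12:6443: connect: connection refused")
		}
		return nil // API server finally reachable
	}, 200*time.Millisecond, 7*time.Second)
}
```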
Jul 14 22:37:21.242522 kubelet[2231]: I0714 22:37:21.242524 2231 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 14 22:37:21.242678 kubelet[2231]: E0714 22:37:21.242581 2231 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 14 22:37:21.243302 kubelet[2231]: E0714 22:37:21.243261 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 14 22:37:21.316003 kubelet[2231]: E0714 22:37:21.315956 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.343325 kubelet[2231]: E0714 22:37:21.343283 2231 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 22:37:21.416247 kubelet[2231]: E0714 22:37:21.416090 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.418840 kubelet[2231]: E0714 22:37:21.418800 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms"
Jul 14 22:37:21.516990 kubelet[2231]: E0714 22:37:21.516920 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.544330 kubelet[2231]: E0714 22:37:21.544287 2231 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 22:37:21.617772 kubelet[2231]: E0714 22:37:21.617719 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.718917 kubelet[2231]: E0714 22:37:21.718773 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.819376 kubelet[2231]: E0714 22:37:21.819332 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.819703 kubelet[2231]: E0714 22:37:21.819668 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms"
Jul 14 22:37:21.920165 kubelet[2231]: E0714 22:37:21.920090 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:21.945370 kubelet[2231]: E0714 22:37:21.945290 2231 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 14 22:37:22.020789 kubelet[2231]: E0714 22:37:22.020733 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:22.042466 kubelet[2231]: E0714 22:37:22.042416 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 14 22:37:22.118469 kubelet[2231]: E0714 22:37:22.118407 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 14 22:37:22.121772 kubelet[2231]: E0714 22:37:22.121728 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:22.151365 kubelet[2231]: I0714 22:37:22.151341 2231 policy_none.go:49] "None policy: Start"
Jul 14 22:37:22.151365 kubelet[2231]: I0714 22:37:22.151362 2231 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 14 22:37:22.151442 kubelet[2231]: I0714 22:37:22.151375 2231 state_mem.go:35] "Initializing new in-memory state store"
Jul 14 22:37:22.221839 kubelet[2231]: E0714 22:37:22.221785 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 14 22:37:22.221913 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 14 22:37:22.235670 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 14 22:37:22.239083 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 14 22:37:22.242399 kubelet[2231]: E0714 22:37:22.242343 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 14 22:37:22.252399 kubelet[2231]: E0714 22:37:22.252356 2231 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 14 22:37:22.252697 kubelet[2231]: I0714 22:37:22.252674 2231 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 14 22:37:22.252761 kubelet[2231]: I0714 22:37:22.252692 2231 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 14 22:37:22.253524 kubelet[2231]: I0714 22:37:22.252937 2231 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 14 22:37:22.253790 kubelet[2231]: E0714 22:37:22.253751 2231 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 14 22:37:22.253852 kubelet[2231]: E0714 22:37:22.253809 2231 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 14 22:37:22.354584 kubelet[2231]: I0714 22:37:22.354480 2231 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:37:22.354799 kubelet[2231]: E0714 22:37:22.354768 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost"
Jul 14 22:37:22.556983 kubelet[2231]: I0714 22:37:22.556933 2231 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:37:22.557426 kubelet[2231]: E0714 22:37:22.557371 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost"
Jul 14 22:37:22.620974 kubelet[2231]: E0714 22:37:22.620859 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s"
Jul 14 22:37:22.801059 kubelet[2231]: E0714 22:37:22.801014 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jul 14 22:37:22.825628 kubelet[2231]: I0714 22:37:22.825569 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:37:22.825628 kubelet[2231]: I0714 22:37:22.825613 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:37:22.825731 kubelet[2231]: I0714 22:37:22.825646 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:37:22.825731 kubelet[2231]: I0714 22:37:22.825668 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:37:22.826141 kubelet[2231]: I0714 22:37:22.826099 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 14 22:37:22.855962 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice.
Jul 14 22:37:22.868349 kubelet[2231]: E0714 22:37:22.868299 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 22:37:22.926934 kubelet[2231]: I0714 22:37:22.926754 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 14 22:37:22.936694 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice.
Jul 14 22:37:22.938579 kubelet[2231]: E0714 22:37:22.938540 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 22:37:22.958646 kubelet[2231]: I0714 22:37:22.958625 2231 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:37:22.959071 kubelet[2231]: E0714 22:37:22.959035 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost"
Jul 14 22:37:23.016325 systemd[1]: Created slice kubepods-burstable-poda8e7932053f7cf4b2d8e75315fa870ec.slice - libcontainer container kubepods-burstable-poda8e7932053f7cf4b2d8e75315fa870ec.slice.
Jul 14 22:37:23.018349 kubelet[2231]: E0714 22:37:23.018311 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 14 22:37:23.027421 kubelet[2231]: I0714 22:37:23.027364 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e7932053f7cf4b2d8e75315fa870ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e7932053f7cf4b2d8e75315fa870ec\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:37:23.027533 kubelet[2231]: I0714 22:37:23.027424 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e7932053f7cf4b2d8e75315fa870ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e7932053f7cf4b2d8e75315fa870ec\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:37:23.027533 kubelet[2231]: I0714 22:37:23.027495 2231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e7932053f7cf4b2d8e75315fa870ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e7932053f7cf4b2d8e75315fa870ec\") " pod="kube-system/kube-apiserver-localhost"
Jul 14 22:37:23.169297 kubelet[2231]: E0714 22:37:23.169231 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:37:23.170141 containerd[1471]: time="2025-07-14T22:37:23.170062895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}"
Jul 14 22:37:23.239952 kubelet[2231]: E0714 22:37:23.239822 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:37:23.240494 containerd[1471]: time="2025-07-14T22:37:23.240387705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}"
Jul 14 22:37:23.273715 kubelet[2231]: E0714 22:37:23.273665 2231 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 14 22:37:23.319428 kubelet[2231]: E0714 22:37:23.319386 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:37:23.319984 containerd[1471]: time="2025-07-14T22:37:23.319922074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e7932053f7cf4b2d8e75315fa870ec,Namespace:kube-system,Attempt:0,}"
Jul 14 22:37:23.760825 kubelet[2231]: I0714 22:37:23.760791 2231 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 14 22:37:23.761250 kubelet[2231]: E0714 22:37:23.761204 2231 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost"
Jul 14 22:37:24.221714 kubelet[2231]: E0714 22:37:24.221651 2231 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="3.2s"
Jul 14 22:37:24.361797 kubelet[2231]: E0714 22:37:24.361753 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 14 22:37:24.448115 kubelet[2231]: E0714 22:37:24.448049 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 14 22:37:24.506206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount891798142.mount: Deactivated successfully.
Jul 14 22:37:24.514383 containerd[1471]: time="2025-07-14T22:37:24.514313883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:37:24.516731 containerd[1471]: time="2025-07-14T22:37:24.516686893Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 22:37:24.518016 containerd[1471]: time="2025-07-14T22:37:24.517963140Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:37:24.519229 containerd[1471]: time="2025-07-14T22:37:24.519187020Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:37:24.520788 containerd[1471]: time="2025-07-14T22:37:24.520756268Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:37:24.521679 containerd[1471]: time="2025-07-14T22:37:24.521641412Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jul 14 22:37:24.522700 containerd[1471]: time="2025-07-14T22:37:24.522663152Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jul 14 22:37:24.524755 containerd[1471]: time="2025-07-14T22:37:24.524724485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jul 14 22:37:24.526784 containerd[1471]: time="2025-07-14T22:37:24.526726827Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.356563223s"
Jul 14 22:37:24.527254 containerd[1471]: time="2025-07-14T22:37:24.527210577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.207206187s"
Jul 14 22:37:24.527805 containerd[1471]: time="2025-07-14T22:37:24.527774035Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.287235195s"
Jul 14 22:37:24.671263 containerd[1471]: time="2025-07-14T22:37:24.670925889Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:37:24.671263 containerd[1471]: time="2025-07-14T22:37:24.670984569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:37:24.671263 containerd[1471]: time="2025-07-14T22:37:24.671009395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:37:24.671263 containerd[1471]: time="2025-07-14T22:37:24.671100957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:37:24.672685 containerd[1471]: time="2025-07-14T22:37:24.672111216Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:37:24.672685 containerd[1471]: time="2025-07-14T22:37:24.672236140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:37:24.672685 containerd[1471]: time="2025-07-14T22:37:24.672256117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:37:24.672685 containerd[1471]: time="2025-07-14T22:37:24.672521156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:37:24.675159 containerd[1471]: time="2025-07-14T22:37:24.675005503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:37:24.675159 containerd[1471]: time="2025-07-14T22:37:24.675062000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:37:24.675159 containerd[1471]: time="2025-07-14T22:37:24.675101244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:37:24.676482 containerd[1471]: time="2025-07-14T22:37:24.675940169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:37:24.694660 systemd[1]: Started cri-containerd-83a6824b814530907cf1561f005a4b73dd82d2b9fb93ff6f5667f627e10f6404.scope - libcontainer container 83a6824b814530907cf1561f005a4b73dd82d2b9fb93ff6f5667f627e10f6404.
Jul 14 22:37:24.699039 systemd[1]: Started cri-containerd-1a4f3332b362f8544d29a56248c5120d38083e2383ecd257731b8db9595e4940.scope - libcontainer container 1a4f3332b362f8544d29a56248c5120d38083e2383ecd257731b8db9595e4940.
Jul 14 22:37:24.701485 systemd[1]: Started cri-containerd-8360883559de0a4a0a38d21ea1fe3e652ebbb9c9914620560ef8c06bfc65fee4.scope - libcontainer container 8360883559de0a4a0a38d21ea1fe3e652ebbb9c9914620560ef8c06bfc65fee4.
Jul 14 22:37:24.736590 containerd[1471]: time="2025-07-14T22:37:24.736490274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"83a6824b814530907cf1561f005a4b73dd82d2b9fb93ff6f5667f627e10f6404\""
Jul 14 22:37:24.738121 kubelet[2231]: E0714 22:37:24.738059 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:37:24.738803 containerd[1471]: time="2025-07-14T22:37:24.738731746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8e7932053f7cf4b2d8e75315fa870ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a4f3332b362f8544d29a56248c5120d38083e2383ecd257731b8db9595e4940\""
Jul 14 22:37:24.739356 kubelet[2231]: E0714 22:37:24.739333 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:37:24.742204 containerd[1471]: time="2025-07-14T22:37:24.742178161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"8360883559de0a4a0a38d21ea1fe3e652ebbb9c9914620560ef8c06bfc65fee4\""
Jul 14 22:37:24.742624 kubelet[2231]: E0714 22:37:24.742598 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:37:24.750021 containerd[1471]: time="2025-07-14T22:37:24.749982606Z" level=info msg="CreateContainer within sandbox \"83a6824b814530907cf1561f005a4b73dd82d2b9fb93ff6f5667f627e10f6404\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jul 14 22:37:24.764889 containerd[1471]: time="2025-07-14T22:37:24.764820467Z" level=info msg="CreateContainer within sandbox \"1a4f3332b362f8544d29a56248c5120d38083e2383ecd257731b8db9595e4940\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jul 14 22:37:24.779608 containerd[1471]: time="2025-07-14T22:37:24.779568858Z" level=info msg="CreateContainer within sandbox \"8360883559de0a4a0a38d21ea1fe3e652ebbb9c9914620560ef8c06bfc65fee4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jul 14 22:37:24.897117 containerd[1471]: time="2025-07-14T22:37:24.897051239Z" level=info msg="CreateContainer within sandbox \"83a6824b814530907cf1561f005a4b73dd82d2b9fb93ff6f5667f627e10f6404\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"480cdb4cf59eb1b1c002e2365ddef4278fdd2b467839df9f6b958027edb94a26\""
Jul 14 22:37:24.897723 containerd[1471]: time="2025-07-14T22:37:24.897700428Z" level=info msg="StartContainer for \"480cdb4cf59eb1b1c002e2365ddef4278fdd2b467839df9f6b958027edb94a26\""
Jul 14 22:37:24.928612 containerd[1471]: time="2025-07-14T22:37:24.928557889Z" level=info msg="CreateContainer within sandbox \"1a4f3332b362f8544d29a56248c5120d38083e2383ecd257731b8db9595e4940\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5cff8a81d9ecba5a3122932b006c019697113a3695f288ec551ac275ec2e9e85\""
Jul 14 22:37:24.928972 containerd[1471]: time="2025-07-14T22:37:24.928938395Z" level=info msg="StartContainer for \"5cff8a81d9ecba5a3122932b006c019697113a3695f288ec551ac275ec2e9e85\""
Jul 14 22:37:24.930627 systemd[1]: Started cri-containerd-480cdb4cf59eb1b1c002e2365ddef4278fdd2b467839df9f6b958027edb94a26.scope - libcontainer container 480cdb4cf59eb1b1c002e2365ddef4278fdd2b467839df9f6b958027edb94a26.
Jul 14 22:37:24.943178 kubelet[2231]: E0714 22:37:24.943129 2231 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 14 22:37:24.945289 containerd[1471]: time="2025-07-14T22:37:24.945243963Z" level=info msg="CreateContainer within sandbox \"8360883559de0a4a0a38d21ea1fe3e652ebbb9c9914620560ef8c06bfc65fee4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"764d714f673e8c2abcd6f027547721a874d6e5ee308c3d3f7100d203b83b2df0\""
Jul 14 22:37:24.946042 containerd[1471]: time="2025-07-14T22:37:24.945995966Z" level=info msg="StartContainer for \"764d714f673e8c2abcd6f027547721a874d6e5ee308c3d3f7100d203b83b2df0\""
Jul 14 22:37:24.957620 systemd[1]: Started cri-containerd-5cff8a81d9ecba5a3122932b006c019697113a3695f288ec551ac275ec2e9e85.scope - libcontainer container 5cff8a81d9ecba5a3122932b006c019697113a3695f288ec551ac275ec2e9e85.
Jul 14 22:37:24.982828 systemd[1]: Started cri-containerd-764d714f673e8c2abcd6f027547721a874d6e5ee308c3d3f7100d203b83b2df0.scope - libcontainer container 764d714f673e8c2abcd6f027547721a874d6e5ee308c3d3f7100d203b83b2df0.
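Each static pod goes through the same CRI sequence visible in the containerd lines: RunPodSandbox (using the pause:3.8 image pulled above as the sandbox), CreateContainer inside the returned sandbox id, then StartContainer, with every container landing in its own cri-containerd-<id>.scope unit under the systemd cgroup driver. A sketch of that sequence against the CRI v1 gRPC API (assumes the k8s.io/cri-api and google.golang.org/grpc modules and the default containerd socket path; not kubelet's actual code path):

```go
// crisequence.go: sketch of RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-localhost",
			Uid:       "834ee54f1daa06092e339273649eb5ea", // UID from the log
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	// 1. Sandbox: the pause container that holds the pod's namespaces.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Container created inside the sandbox id returned above.
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        &runtimeapi.ContainerConfig{Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"}},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. Start it; with the systemd cgroup driver it runs inside a
	//    cri-containerd-<id>.scope unit, as the systemd lines show.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		log.Fatal(err)
	}
}
```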
Jul 14 22:37:24.990208 containerd[1471]: time="2025-07-14T22:37:24.990148057Z" level=info msg="StartContainer for \"480cdb4cf59eb1b1c002e2365ddef4278fdd2b467839df9f6b958027edb94a26\" returns successfully" Jul 14 22:37:25.015759 containerd[1471]: time="2025-07-14T22:37:25.015609877Z" level=info msg="StartContainer for \"5cff8a81d9ecba5a3122932b006c019697113a3695f288ec551ac275ec2e9e85\" returns successfully" Jul 14 22:37:25.043244 containerd[1471]: time="2025-07-14T22:37:25.043166098Z" level=info msg="StartContainer for \"764d714f673e8c2abcd6f027547721a874d6e5ee308c3d3f7100d203b83b2df0\" returns successfully" Jul 14 22:37:25.252723 kubelet[2231]: E0714 22:37:25.252677 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:25.253194 kubelet[2231]: E0714 22:37:25.252787 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:25.254250 kubelet[2231]: E0714 22:37:25.254224 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:25.254330 kubelet[2231]: E0714 22:37:25.254309 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:25.256227 kubelet[2231]: E0714 22:37:25.256203 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:25.256324 kubelet[2231]: E0714 22:37:25.256303 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:25.362841 kubelet[2231]: I0714 22:37:25.362719 2231 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:37:26.258474 kubelet[2231]: E0714 22:37:26.258425 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:26.258928 kubelet[2231]: E0714 22:37:26.258505 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:26.258928 kubelet[2231]: E0714 22:37:26.258566 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:26.258928 kubelet[2231]: E0714 22:37:26.258586 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:26.301804 kubelet[2231]: I0714 22:37:26.301762 2231 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:37:26.301804 kubelet[2231]: E0714 22:37:26.301791 2231 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:37:26.767819 kubelet[2231]: E0714 22:37:26.767766 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:26.868280 
kubelet[2231]: E0714 22:37:26.868234 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:26.968783 kubelet[2231]: E0714 22:37:26.968736 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.069894 kubelet[2231]: E0714 22:37:27.069770 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.170535 kubelet[2231]: E0714 22:37:27.170444 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.260229 kubelet[2231]: E0714 22:37:27.260176 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:27.260739 kubelet[2231]: E0714 22:37:27.260432 2231 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 14 22:37:27.260739 kubelet[2231]: E0714 22:37:27.260590 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:27.261440 kubelet[2231]: E0714 22:37:27.261404 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:27.270906 kubelet[2231]: E0714 22:37:27.270852 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.371871 kubelet[2231]: E0714 22:37:27.371725 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.472580 kubelet[2231]: E0714 22:37:27.472526 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.573079 kubelet[2231]: E0714 22:37:27.573039 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.673914 kubelet[2231]: E0714 22:37:27.673761 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.774876 kubelet[2231]: E0714 22:37:27.774838 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.875478 kubelet[2231]: E0714 22:37:27.875415 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:27.976186 kubelet[2231]: E0714 22:37:27.976047 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.076627 kubelet[2231]: E0714 22:37:28.076566 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.177252 kubelet[2231]: E0714 22:37:28.177186 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.277667 kubelet[2231]: E0714 22:37:28.277604 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.378300 kubelet[2231]: E0714 22:37:28.378227 2231 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.478875 kubelet[2231]: E0714 22:37:28.478822 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.579790 kubelet[2231]: E0714 22:37:28.579550 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.680284 kubelet[2231]: E0714 22:37:28.680230 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.780930 kubelet[2231]: E0714 22:37:28.780858 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.881803 kubelet[2231]: E0714 22:37:28.881640 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:28.982109 kubelet[2231]: E0714 22:37:28.982059 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.082604 kubelet[2231]: E0714 22:37:29.082530 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.183277 kubelet[2231]: E0714 22:37:29.183140 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.284280 kubelet[2231]: E0714 22:37:29.284241 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.385225 kubelet[2231]: E0714 22:37:29.385163 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.486039 kubelet[2231]: E0714 22:37:29.485873 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.587041 kubelet[2231]: E0714 22:37:29.586976 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.687817 kubelet[2231]: E0714 22:37:29.687759 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.788389 kubelet[2231]: E0714 22:37:29.788336 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.888968 kubelet[2231]: E0714 22:37:29.888904 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:29.989563 kubelet[2231]: E0714 22:37:29.989512 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:30.090785 kubelet[2231]: E0714 22:37:30.090636 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:30.191393 kubelet[2231]: E0714 22:37:30.191338 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:30.291911 kubelet[2231]: E0714 22:37:30.291860 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:37:30.392470 kubelet[2231]: E0714 22:37:30.392306 2231 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"localhost\" not found" Jul 14 22:37:30.516127 kubelet[2231]: I0714 22:37:30.516076 2231 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:31.013467 kubelet[2231]: I0714 22:37:31.013398 2231 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:37:31.068623 kubelet[2231]: I0714 22:37:31.068574 2231 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:37:31.214629 kubelet[2231]: I0714 22:37:31.214585 2231 apiserver.go:52] "Watching apiserver" Jul 14 22:37:31.216433 kubelet[2231]: I0714 22:37:31.216373 2231 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:37:31.216762 kubelet[2231]: E0714 22:37:31.216739 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:31.216943 kubelet[2231]: E0714 22:37:31.216865 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:31.217073 kubelet[2231]: E0714 22:37:31.217050 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:31.544677 kubelet[2231]: I0714 22:37:31.544583 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.544564191 podStartE2EDuration="1.544564191s" podCreationTimestamp="2025-07-14 22:37:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:37:31.407958454 +0000 UTC m=+10.568808737" watchObservedRunningTime="2025-07-14 22:37:31.544564191 +0000 UTC m=+10.705414474" Jul 14 22:37:31.545175 kubelet[2231]: I0714 22:37:31.544713 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.544705997 podStartE2EDuration="544.705997ms" podCreationTimestamp="2025-07-14 22:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:37:31.544656054 +0000 UTC m=+10.705506337" watchObservedRunningTime="2025-07-14 22:37:31.544705997 +0000 UTC m=+10.705556300" Jul 14 22:37:32.306989 kubelet[2231]: E0714 22:37:32.306916 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:37.200274 kubelet[2231]: E0714 22:37:37.199994 2231 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:37.492484 kubelet[2231]: I0714 22:37:37.492317 2231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.492283678 podStartE2EDuration="6.492283678s" podCreationTimestamp="2025-07-14 22:37:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:37:31.588739114 
+0000 UTC m=+10.749589397" watchObservedRunningTime="2025-07-14 22:37:37.492283678 +0000 UTC m=+16.653133962" Jul 14 22:37:39.882680 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-7.scope)... Jul 14 22:37:39.882695 systemd[1]: Reloading... Jul 14 22:37:40.016632 zram_generator::config[2559]: No configuration found. Jul 14 22:37:40.083547 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:37:40.175394 systemd[1]: Reloading finished in 292 ms. Jul 14 22:37:40.223024 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:40.249930 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:37:40.250245 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:40.250335 systemd[1]: kubelet.service: Consumed 1.237s CPU time, 135.4M memory peak, 0B memory swap peak. Jul 14 22:37:40.262807 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:37:40.428786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:37:40.433646 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:37:40.469102 kubelet[2604]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:37:40.469102 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 14 22:37:40.469102 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:37:40.469475 kubelet[2604]: I0714 22:37:40.469167 2604 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:37:40.476398 kubelet[2604]: I0714 22:37:40.476359 2604 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 14 22:37:40.476398 kubelet[2604]: I0714 22:37:40.476382 2604 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:37:40.476691 kubelet[2604]: I0714 22:37:40.476635 2604 server.go:956] "Client rotation is on, will bootstrap in background" Jul 14 22:37:40.478041 kubelet[2604]: I0714 22:37:40.478012 2604 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 14 22:37:40.480308 kubelet[2604]: I0714 22:37:40.480275 2604 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:37:40.483724 kubelet[2604]: E0714 22:37:40.483661 2604 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:37:40.483724 kubelet[2604]: I0714 22:37:40.483709 2604 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jul 14 22:37:40.489259 kubelet[2604]: I0714 22:37:40.489221 2604 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 14 22:37:40.489479 kubelet[2604]: I0714 22:37:40.489422 2604 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:37:40.489606 kubelet[2604]: I0714 22:37:40.489480 2604 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 14 22:37:40.489695 kubelet[2604]: I0714 22:37:40.489610 2604 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:37:40.489695 kubelet[2604]: I0714 22:37:40.489620 2604 container_manager_linux.go:303] "Creating device plugin manager" Jul 14 22:37:40.489695 kubelet[2604]: I0714 22:37:40.489669 2604 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:37:40.489842 kubelet[2604]: I0714 22:37:40.489821 2604 kubelet.go:480] "Attempting to sync node with API server" Jul 14 22:37:40.489842 kubelet[2604]: I0714 22:37:40.489837 2604 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:37:40.489889 kubelet[2604]: I0714 22:37:40.489856 2604 kubelet.go:386] "Adding apiserver pod source" Jul 14 22:37:40.489889 kubelet[2604]: I0714 22:37:40.489878 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:37:40.490692 kubelet[2604]: I0714 22:37:40.490584 2604 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:37:40.493520 kubelet[2604]: I0714 22:37:40.493488 2604 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 14 22:37:40.501006 kubelet[2604]: I0714 22:37:40.500982 2604 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 14 22:37:40.501096 kubelet[2604]: I0714 22:37:40.501025 2604 server.go:1289] 
"Started kubelet" Jul 14 22:37:40.503471 kubelet[2604]: I0714 22:37:40.503015 2604 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:37:40.503471 kubelet[2604]: I0714 22:37:40.503154 2604 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:37:40.504161 kubelet[2604]: I0714 22:37:40.504086 2604 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:37:40.504630 kubelet[2604]: I0714 22:37:40.504497 2604 server.go:317] "Adding debug handlers to kubelet server" Jul 14 22:37:40.504686 kubelet[2604]: I0714 22:37:40.504642 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:37:40.506145 kubelet[2604]: I0714 22:37:40.506102 2604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:37:40.506802 kubelet[2604]: E0714 22:37:40.506767 2604 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:37:40.507718 kubelet[2604]: I0714 22:37:40.507338 2604 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 14 22:37:40.507718 kubelet[2604]: I0714 22:37:40.507462 2604 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 14 22:37:40.507718 kubelet[2604]: I0714 22:37:40.507573 2604 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:37:40.508946 kubelet[2604]: I0714 22:37:40.508725 2604 factory.go:223] Registration of the systemd container factory successfully Jul 14 22:37:40.508946 kubelet[2604]: I0714 22:37:40.508825 2604 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:37:40.509956 kubelet[2604]: I0714 22:37:40.509926 2604 factory.go:223] Registration of the containerd container factory successfully Jul 14 22:37:40.519485 kubelet[2604]: I0714 22:37:40.519406 2604 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 14 22:37:40.520848 kubelet[2604]: I0714 22:37:40.520819 2604 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 14 22:37:40.520848 kubelet[2604]: I0714 22:37:40.520841 2604 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 14 22:37:40.520928 kubelet[2604]: I0714 22:37:40.520861 2604 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 14 22:37:40.520928 kubelet[2604]: I0714 22:37:40.520869 2604 kubelet.go:2436] "Starting kubelet main sync loop" Jul 14 22:37:40.520928 kubelet[2604]: E0714 22:37:40.520911 2604 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:37:40.546568 kubelet[2604]: I0714 22:37:40.546533 2604 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 14 22:37:40.546568 kubelet[2604]: I0714 22:37:40.546554 2604 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 14 22:37:40.546568 kubelet[2604]: I0714 22:37:40.546573 2604 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:37:40.546735 kubelet[2604]: I0714 22:37:40.546711 2604 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:37:40.546735 kubelet[2604]: I0714 22:37:40.546721 2604 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:37:40.546813 kubelet[2604]: I0714 22:37:40.546736 2604 policy_none.go:49] "None policy: Start" Jul 14 22:37:40.546813 kubelet[2604]: I0714 22:37:40.546747 2604 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 14 22:37:40.546813 kubelet[2604]: I0714 22:37:40.546757 2604 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:37:40.546872 kubelet[2604]: I0714 22:37:40.546842 2604 state_mem.go:75] "Updated machine memory state" Jul 14 22:37:40.550999 kubelet[2604]: E0714 22:37:40.550968 2604 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 14 22:37:40.551170 kubelet[2604]: I0714 22:37:40.551147 2604 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:37:40.551251 kubelet[2604]: I0714 22:37:40.551167 2604 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:37:40.551589 kubelet[2604]: I0714 22:37:40.551570 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:37:40.552331 kubelet[2604]: E0714 22:37:40.552314 2604 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 14 22:37:40.622380 kubelet[2604]: I0714 22:37:40.622329 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 14 22:37:40.622555 kubelet[2604]: I0714 22:37:40.622329 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 14 22:37:40.622555 kubelet[2604]: I0714 22:37:40.622542 2604 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.657086 kubelet[2604]: I0714 22:37:40.657052 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 14 22:37:40.773734 kubelet[2604]: E0714 22:37:40.773633 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:37:40.773734 kubelet[2604]: E0714 22:37:40.773728 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 14 22:37:40.808109 kubelet[2604]: I0714 22:37:40.808068 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8e7932053f7cf4b2d8e75315fa870ec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8e7932053f7cf4b2d8e75315fa870ec\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:37:40.808109 kubelet[2604]: I0714 22:37:40.808110 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.808257 kubelet[2604]: I0714 22:37:40.808140 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.808257 kubelet[2604]: I0714 22:37:40.808166 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.808257 kubelet[2604]: I0714 22:37:40.808198 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:37:40.808257 kubelet[2604]: I0714 22:37:40.808252 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8e7932053f7cf4b2d8e75315fa870ec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e7932053f7cf4b2d8e75315fa870ec\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:37:40.808369 kubelet[2604]: I0714 22:37:40.808303 2604 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8e7932053f7cf4b2d8e75315fa870ec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8e7932053f7cf4b2d8e75315fa870ec\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:37:40.808369 kubelet[2604]: I0714 22:37:40.808336 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.808416 kubelet[2604]: I0714 22:37:40.808379 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.817627 kubelet[2604]: E0714 22:37:40.817598 2604 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 14 22:37:40.915156 kubelet[2604]: I0714 22:37:40.915076 2604 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 14 22:37:40.915156 kubelet[2604]: I0714 22:37:40.915160 2604 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 14 22:37:41.074062 kubelet[2604]: E0714 22:37:41.073936 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:41.074062 kubelet[2604]: E0714 22:37:41.073936 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:41.117896 kubelet[2604]: E0714 22:37:41.117838 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:41.490943 kubelet[2604]: I0714 22:37:41.490790 2604 apiserver.go:52] "Watching apiserver" Jul 14 22:37:41.507602 kubelet[2604]: I0714 22:37:41.507525 2604 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 14 22:37:41.533037 kubelet[2604]: E0714 22:37:41.532919 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:41.533037 kubelet[2604]: E0714 22:37:41.532982 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:41.533037 kubelet[2604]: E0714 22:37:41.533041 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:42.534393 kubelet[2604]: E0714 22:37:42.534347 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:42.534957 kubelet[2604]: E0714 22:37:42.534542 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:43.203067 kubelet[2604]: I0714 22:37:43.203033 2604 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:37:43.203380 containerd[1471]: time="2025-07-14T22:37:43.203336038Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 14 22:37:43.203803 kubelet[2604]: I0714 22:37:43.203535 2604 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:37:46.583216 kubelet[2604]: E0714 22:37:46.583093 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:47.195746 systemd[1]: Created slice kubepods-besteffort-pod232a2360_0f89_4d74_8677_be2dcd899610.slice - libcontainer container kubepods-besteffort-pod232a2360_0f89_4d74_8677_be2dcd899610.slice. Jul 14 22:37:47.246168 kubelet[2604]: I0714 22:37:47.246068 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/232a2360-0f89-4d74-8677-be2dcd899610-xtables-lock\") pod \"kube-proxy-tqpj5\" (UID: \"232a2360-0f89-4d74-8677-be2dcd899610\") " pod="kube-system/kube-proxy-tqpj5" Jul 14 22:37:47.246168 kubelet[2604]: I0714 22:37:47.246164 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/232a2360-0f89-4d74-8677-be2dcd899610-lib-modules\") pod \"kube-proxy-tqpj5\" (UID: \"232a2360-0f89-4d74-8677-be2dcd899610\") " pod="kube-system/kube-proxy-tqpj5" Jul 14 22:37:47.246390 kubelet[2604]: I0714 22:37:47.246197 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twf5p\" (UniqueName: \"kubernetes.io/projected/232a2360-0f89-4d74-8677-be2dcd899610-kube-api-access-twf5p\") pod \"kube-proxy-tqpj5\" (UID: \"232a2360-0f89-4d74-8677-be2dcd899610\") " pod="kube-system/kube-proxy-tqpj5" Jul 14 22:37:47.246390 kubelet[2604]: I0714 22:37:47.246228 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/232a2360-0f89-4d74-8677-be2dcd899610-kube-proxy\") pod \"kube-proxy-tqpj5\" (UID: \"232a2360-0f89-4d74-8677-be2dcd899610\") " pod="kube-system/kube-proxy-tqpj5" Jul 14 22:37:47.505200 kubelet[2604]: E0714 22:37:47.505043 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:47.505651 containerd[1471]: time="2025-07-14T22:37:47.505606527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqpj5,Uid:232a2360-0f89-4d74-8677-be2dcd899610,Namespace:kube-system,Attempt:0,}" Jul 14 22:37:47.541934 kubelet[2604]: E0714 22:37:47.541902 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:47.632093 kubelet[2604]: E0714 22:37:47.632042 2604 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:47.636884 kubelet[2604]: E0714 22:37:47.636842 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:47.730845 containerd[1471]: time="2025-07-14T22:37:47.730740486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:37:47.730845 containerd[1471]: time="2025-07-14T22:37:47.730809986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:37:47.731079 containerd[1471]: time="2025-07-14T22:37:47.731037894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:37:47.731817 containerd[1471]: time="2025-07-14T22:37:47.731752223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:37:47.761731 systemd[1]: Started cri-containerd-f32f33c3e32605f9988f29e763365276ea0b5006c7d11c0f83149144b94b3376.scope - libcontainer container f32f33c3e32605f9988f29e763365276ea0b5006c7d11c0f83149144b94b3376. Jul 14 22:37:47.786258 containerd[1471]: time="2025-07-14T22:37:47.786206735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tqpj5,Uid:232a2360-0f89-4d74-8677-be2dcd899610,Namespace:kube-system,Attempt:0,} returns sandbox id \"f32f33c3e32605f9988f29e763365276ea0b5006c7d11c0f83149144b94b3376\"" Jul 14 22:37:47.787605 kubelet[2604]: E0714 22:37:47.787581 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:47.876318 containerd[1471]: time="2025-07-14T22:37:47.876272568Z" level=info msg="CreateContainer within sandbox \"f32f33c3e32605f9988f29e763365276ea0b5006c7d11c0f83149144b94b3376\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:37:48.427341 containerd[1471]: time="2025-07-14T22:37:48.427250029Z" level=info msg="CreateContainer within sandbox \"f32f33c3e32605f9988f29e763365276ea0b5006c7d11c0f83149144b94b3376\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5bcaa7bb7bf51b02fe1a09845fc97ef2f356f373fa6a266ec3b9743ea0b85a1\"" Jul 14 22:37:48.427916 containerd[1471]: time="2025-07-14T22:37:48.427855134Z" level=info msg="StartContainer for \"c5bcaa7bb7bf51b02fe1a09845fc97ef2f356f373fa6a266ec3b9743ea0b85a1\"" Jul 14 22:37:48.464652 systemd[1]: Started cri-containerd-c5bcaa7bb7bf51b02fe1a09845fc97ef2f356f373fa6a266ec3b9743ea0b85a1.scope - libcontainer container c5bcaa7bb7bf51b02fe1a09845fc97ef2f356f373fa6a266ec3b9743ea0b85a1. 
Jul 14 22:37:48.563624 containerd[1471]: time="2025-07-14T22:37:48.563492498Z" level=info msg="StartContainer for \"c5bcaa7bb7bf51b02fe1a09845fc97ef2f356f373fa6a266ec3b9743ea0b85a1\" returns successfully" Jul 14 22:37:48.567653 kubelet[2604]: E0714 22:37:48.567613 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:48.567984 kubelet[2604]: E0714 22:37:48.567938 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:49.569189 kubelet[2604]: E0714 22:37:49.568939 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:49.569189 kubelet[2604]: E0714 22:37:49.569066 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:49.569189 kubelet[2604]: E0714 22:37:49.569100 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:37:53.187107 kubelet[2604]: I0714 22:37:53.186943 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tqpj5" podStartSLOduration=9.18692348 podStartE2EDuration="9.18692348s" podCreationTimestamp="2025-07-14 22:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:37:49.594812504 +0000 UTC m=+9.157117170" watchObservedRunningTime="2025-07-14 22:37:53.18692348 +0000 UTC m=+12.749228116" Jul 14 22:37:53.214384 systemd[1]: Created slice kubepods-besteffort-podd9acf755_aebe_46d3_a086_980aa13babe5.slice - libcontainer container kubepods-besteffort-podd9acf755_aebe_46d3_a086_980aa13babe5.slice. Jul 14 22:37:53.277400 kubelet[2604]: I0714 22:37:53.277347 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2slk\" (UniqueName: \"kubernetes.io/projected/d9acf755-aebe-46d3-a086-980aa13babe5-kube-api-access-r2slk\") pod \"tigera-operator-747864d56d-8ngcn\" (UID: \"d9acf755-aebe-46d3-a086-980aa13babe5\") " pod="tigera-operator/tigera-operator-747864d56d-8ngcn" Jul 14 22:37:53.277400 kubelet[2604]: I0714 22:37:53.277386 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9acf755-aebe-46d3-a086-980aa13babe5-var-lib-calico\") pod \"tigera-operator-747864d56d-8ngcn\" (UID: \"d9acf755-aebe-46d3-a086-980aa13babe5\") " pod="tigera-operator/tigera-operator-747864d56d-8ngcn" Jul 14 22:37:53.819723 containerd[1471]: time="2025-07-14T22:37:53.819599992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8ngcn,Uid:d9acf755-aebe-46d3-a086-980aa13babe5,Namespace:tigera-operator,Attempt:0,}" Jul 14 22:37:53.851796 containerd[1471]: time="2025-07-14T22:37:53.851676138Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:37:53.851980 containerd[1471]: time="2025-07-14T22:37:53.851802913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:37:53.852440 containerd[1471]: time="2025-07-14T22:37:53.852389757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:37:53.852600 containerd[1471]: time="2025-07-14T22:37:53.852540087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:37:53.876617 systemd[1]: Started cri-containerd-5814ea07efcaa12b52a90e596f15aabff34eaa520f1d79c30fe4d18554564650.scope - libcontainer container 5814ea07efcaa12b52a90e596f15aabff34eaa520f1d79c30fe4d18554564650. Jul 14 22:37:53.918898 containerd[1471]: time="2025-07-14T22:37:53.918836323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-8ngcn,Uid:d9acf755-aebe-46d3-a086-980aa13babe5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5814ea07efcaa12b52a90e596f15aabff34eaa520f1d79c30fe4d18554564650\"" Jul 14 22:37:53.920511 containerd[1471]: time="2025-07-14T22:37:53.920419563Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 22:37:56.080924 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1898963912.mount: Deactivated successfully. Jul 14 22:37:57.697252 containerd[1471]: time="2025-07-14T22:37:57.697152014Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:57.702131 containerd[1471]: time="2025-07-14T22:37:57.702052608Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 14 22:37:57.707165 containerd[1471]: time="2025-07-14T22:37:57.707062111Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:57.711330 containerd[1471]: time="2025-07-14T22:37:57.711246825Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:37:57.712551 containerd[1471]: time="2025-07-14T22:37:57.712235380Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.791757635s" Jul 14 22:37:57.712551 containerd[1471]: time="2025-07-14T22:37:57.712300917Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 14 22:37:57.721868 containerd[1471]: time="2025-07-14T22:37:57.721795683Z" level=info msg="CreateContainer within sandbox \"5814ea07efcaa12b52a90e596f15aabff34eaa520f1d79c30fe4d18554564650\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 22:37:57.748999 containerd[1471]: time="2025-07-14T22:37:57.748922586Z" level=info msg="CreateContainer within sandbox 
\"5814ea07efcaa12b52a90e596f15aabff34eaa520f1d79c30fe4d18554564650\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7b59aa3966b7b583c4064707195aa427b0435d5edbe767c4ad5fc45bb33765bc\"" Jul 14 22:37:57.749787 containerd[1471]: time="2025-07-14T22:37:57.749698812Z" level=info msg="StartContainer for \"7b59aa3966b7b583c4064707195aa427b0435d5edbe767c4ad5fc45bb33765bc\"" Jul 14 22:37:57.788784 systemd[1]: Started cri-containerd-7b59aa3966b7b583c4064707195aa427b0435d5edbe767c4ad5fc45bb33765bc.scope - libcontainer container 7b59aa3966b7b583c4064707195aa427b0435d5edbe767c4ad5fc45bb33765bc. Jul 14 22:37:57.824626 containerd[1471]: time="2025-07-14T22:37:57.824567733Z" level=info msg="StartContainer for \"7b59aa3966b7b583c4064707195aa427b0435d5edbe767c4ad5fc45bb33765bc\" returns successfully" Jul 14 22:38:04.131837 sudo[1647]: pam_unix(sudo:session): session closed for user root Jul 14 22:38:04.134724 sshd[1644]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:04.140338 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:38284.service: Deactivated successfully. Jul 14 22:38:04.146113 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:38:04.146811 systemd[1]: session-7.scope: Consumed 6.474s CPU time, 162.1M memory peak, 0B memory swap peak. Jul 14 22:38:04.147617 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:38:04.151376 systemd-logind[1452]: Removed session 7. Jul 14 22:38:07.442393 kubelet[2604]: I0714 22:38:07.442288 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-8ngcn" podStartSLOduration=10.649118125 podStartE2EDuration="14.442268165s" podCreationTimestamp="2025-07-14 22:37:53 +0000 UTC" firstStartedPulling="2025-07-14 22:37:53.920145012 +0000 UTC m=+13.482449648" lastFinishedPulling="2025-07-14 22:37:57.713295052 +0000 UTC m=+17.275599688" observedRunningTime="2025-07-14 22:37:58.8229358 +0000 UTC m=+18.385240436" watchObservedRunningTime="2025-07-14 22:38:07.442268165 +0000 UTC m=+27.004572801" Jul 14 22:38:07.454266 systemd[1]: Created slice kubepods-besteffort-pod8a01de9e_a2f4_4bf6_9224_d080757b4066.slice - libcontainer container kubepods-besteffort-pod8a01de9e_a2f4_4bf6_9224_d080757b4066.slice. 
Jul 14 22:38:07.481909 kubelet[2604]: I0714 22:38:07.481773 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8a01de9e-a2f4-4bf6-9224-d080757b4066-typha-certs\") pod \"calico-typha-6c4dcd5599-87cdq\" (UID: \"8a01de9e-a2f4-4bf6-9224-d080757b4066\") " pod="calico-system/calico-typha-6c4dcd5599-87cdq" Jul 14 22:38:07.482252 kubelet[2604]: I0714 22:38:07.482139 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8a01de9e-a2f4-4bf6-9224-d080757b4066-tigera-ca-bundle\") pod \"calico-typha-6c4dcd5599-87cdq\" (UID: \"8a01de9e-a2f4-4bf6-9224-d080757b4066\") " pod="calico-system/calico-typha-6c4dcd5599-87cdq" Jul 14 22:38:07.482252 kubelet[2604]: I0714 22:38:07.482224 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qrsv\" (UniqueName: \"kubernetes.io/projected/8a01de9e-a2f4-4bf6-9224-d080757b4066-kube-api-access-7qrsv\") pod \"calico-typha-6c4dcd5599-87cdq\" (UID: \"8a01de9e-a2f4-4bf6-9224-d080757b4066\") " pod="calico-system/calico-typha-6c4dcd5599-87cdq" Jul 14 22:38:07.515206 systemd[1]: Created slice kubepods-besteffort-pod18d4bf30_41d5_4060_8e47_fcec72a88269.slice - libcontainer container kubepods-besteffort-pod18d4bf30_41d5_4060_8e47_fcec72a88269.slice. Jul 14 22:38:07.583174 kubelet[2604]: I0714 22:38:07.583118 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-cni-log-dir\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583174 kubelet[2604]: I0714 22:38:07.583162 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-flexvol-driver-host\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583352 kubelet[2604]: I0714 22:38:07.583186 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18d4bf30-41d5-4060-8e47-fcec72a88269-tigera-ca-bundle\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583352 kubelet[2604]: I0714 22:38:07.583206 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-var-lib-calico\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583352 kubelet[2604]: I0714 22:38:07.583267 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-xtables-lock\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583352 kubelet[2604]: I0714 22:38:07.583312 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-djkz2\" (UniqueName: \"kubernetes.io/projected/18d4bf30-41d5-4060-8e47-fcec72a88269-kube-api-access-djkz2\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583352 kubelet[2604]: I0714 22:38:07.583337 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-lib-modules\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583483 kubelet[2604]: I0714 22:38:07.583352 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-cni-bin-dir\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583483 kubelet[2604]: I0714 22:38:07.583367 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-cni-net-dir\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583483 kubelet[2604]: I0714 22:38:07.583394 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-var-run-calico\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583483 kubelet[2604]: I0714 22:38:07.583413 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/18d4bf30-41d5-4060-8e47-fcec72a88269-node-certs\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.583483 kubelet[2604]: I0714 22:38:07.583429 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/18d4bf30-41d5-4060-8e47-fcec72a88269-policysync\") pod \"calico-node-np8k4\" (UID: \"18d4bf30-41d5-4060-8e47-fcec72a88269\") " pod="calico-system/calico-node-np8k4" Jul 14 22:38:07.688325 kubelet[2604]: E0714 22:38:07.688197 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.688325 kubelet[2604]: W0714 22:38:07.688241 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.688325 kubelet[2604]: E0714 22:38:07.688276 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.693630 kubelet[2604]: E0714 22:38:07.693418 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.693630 kubelet[2604]: W0714 22:38:07.693459 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.693630 kubelet[2604]: E0714 22:38:07.693483 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.695144 kubelet[2604]: E0714 22:38:07.695102 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.695199 kubelet[2604]: W0714 22:38:07.695159 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.695199 kubelet[2604]: E0714 22:38:07.695174 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.758123 kubelet[2604]: E0714 22:38:07.758065 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:07.759742 containerd[1471]: time="2025-07-14T22:38:07.759698353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4dcd5599-87cdq,Uid:8a01de9e-a2f4-4bf6-9224-d080757b4066,Namespace:calico-system,Attempt:0,}" Jul 14 22:38:07.760143 kubelet[2604]: E0714 22:38:07.759791 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:07.762008 kubelet[2604]: E0714 22:38:07.761982 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.762008 kubelet[2604]: W0714 22:38:07.762004 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.762113 kubelet[2604]: E0714 22:38:07.762025 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.763661 kubelet[2604]: E0714 22:38:07.763643 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.763661 kubelet[2604]: W0714 22:38:07.763656 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.763761 kubelet[2604]: E0714 22:38:07.763667 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.763977 kubelet[2604]: E0714 22:38:07.763958 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.763977 kubelet[2604]: W0714 22:38:07.763970 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.763977 kubelet[2604]: E0714 22:38:07.763978 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.764656 kubelet[2604]: E0714 22:38:07.764603 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.764656 kubelet[2604]: W0714 22:38:07.764622 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.764656 kubelet[2604]: E0714 22:38:07.764642 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.765022 kubelet[2604]: E0714 22:38:07.765004 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.765276 kubelet[2604]: W0714 22:38:07.765103 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.765276 kubelet[2604]: E0714 22:38:07.765121 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.765585 kubelet[2604]: E0714 22:38:07.765561 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.765585 kubelet[2604]: W0714 22:38:07.765578 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.765679 kubelet[2604]: E0714 22:38:07.765591 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.765928 kubelet[2604]: E0714 22:38:07.765878 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.765928 kubelet[2604]: W0714 22:38:07.765907 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.765928 kubelet[2604]: E0714 22:38:07.765921 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.766249 kubelet[2604]: E0714 22:38:07.766198 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.766249 kubelet[2604]: W0714 22:38:07.766225 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.766249 kubelet[2604]: E0714 22:38:07.766239 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.766625 kubelet[2604]: E0714 22:38:07.766531 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.766625 kubelet[2604]: W0714 22:38:07.766548 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.766625 kubelet[2604]: E0714 22:38:07.766566 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.767023 kubelet[2604]: E0714 22:38:07.766855 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.767023 kubelet[2604]: W0714 22:38:07.766869 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.767023 kubelet[2604]: E0714 22:38:07.766880 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.767190 kubelet[2604]: E0714 22:38:07.767172 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.767190 kubelet[2604]: W0714 22:38:07.767187 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.767274 kubelet[2604]: E0714 22:38:07.767198 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.767435 kubelet[2604]: E0714 22:38:07.767419 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.767435 kubelet[2604]: W0714 22:38:07.767431 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.767526 kubelet[2604]: E0714 22:38:07.767439 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.767735 kubelet[2604]: E0714 22:38:07.767706 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.767735 kubelet[2604]: W0714 22:38:07.767722 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.767735 kubelet[2604]: E0714 22:38:07.767733 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.768032 kubelet[2604]: E0714 22:38:07.768008 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.768032 kubelet[2604]: W0714 22:38:07.768024 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.768101 kubelet[2604]: E0714 22:38:07.768034 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.768280 kubelet[2604]: E0714 22:38:07.768257 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.768280 kubelet[2604]: W0714 22:38:07.768271 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.768501 kubelet[2604]: E0714 22:38:07.768289 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.768577 kubelet[2604]: E0714 22:38:07.768556 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.768577 kubelet[2604]: W0714 22:38:07.768571 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.768756 kubelet[2604]: E0714 22:38:07.768581 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.769371 kubelet[2604]: E0714 22:38:07.769224 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.769371 kubelet[2604]: W0714 22:38:07.769243 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.769371 kubelet[2604]: E0714 22:38:07.769257 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.771197 kubelet[2604]: E0714 22:38:07.771157 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.771197 kubelet[2604]: W0714 22:38:07.771182 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.771419 kubelet[2604]: E0714 22:38:07.771202 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.771585 kubelet[2604]: E0714 22:38:07.771490 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.771585 kubelet[2604]: W0714 22:38:07.771499 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.771585 kubelet[2604]: E0714 22:38:07.771509 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.771931 kubelet[2604]: E0714 22:38:07.771915 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.771931 kubelet[2604]: W0714 22:38:07.771927 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.772002 kubelet[2604]: E0714 22:38:07.771939 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.785402 kubelet[2604]: E0714 22:38:07.785354 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.785402 kubelet[2604]: W0714 22:38:07.785401 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.785583 kubelet[2604]: E0714 22:38:07.785432 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.785583 kubelet[2604]: I0714 22:38:07.785498 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/12e54208-f5d7-4225-a878-cbfd7ce81981-varrun\") pod \"csi-node-driver-rnmm5\" (UID: \"12e54208-f5d7-4225-a878-cbfd7ce81981\") " pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:07.786639 kubelet[2604]: E0714 22:38:07.785975 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.786639 kubelet[2604]: W0714 22:38:07.786019 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.786639 kubelet[2604]: E0714 22:38:07.786031 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.786639 kubelet[2604]: I0714 22:38:07.786085 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12e54208-f5d7-4225-a878-cbfd7ce81981-kubelet-dir\") pod \"csi-node-driver-rnmm5\" (UID: \"12e54208-f5d7-4225-a878-cbfd7ce81981\") " pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:07.786639 kubelet[2604]: E0714 22:38:07.786516 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.786639 kubelet[2604]: W0714 22:38:07.786531 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.786639 kubelet[2604]: E0714 22:38:07.786544 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.787018 kubelet[2604]: E0714 22:38:07.786813 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.787018 kubelet[2604]: W0714 22:38:07.786826 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.787018 kubelet[2604]: E0714 22:38:07.786837 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.787161 kubelet[2604]: E0714 22:38:07.787092 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.787161 kubelet[2604]: W0714 22:38:07.787103 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.787161 kubelet[2604]: E0714 22:38:07.787113 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.787161 kubelet[2604]: I0714 22:38:07.787136 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/12e54208-f5d7-4225-a878-cbfd7ce81981-socket-dir\") pod \"csi-node-driver-rnmm5\" (UID: \"12e54208-f5d7-4225-a878-cbfd7ce81981\") " pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:07.787495 kubelet[2604]: E0714 22:38:07.787381 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.787495 kubelet[2604]: W0714 22:38:07.787395 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.787495 kubelet[2604]: E0714 22:38:07.787407 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.787704 kubelet[2604]: E0714 22:38:07.787683 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.787704 kubelet[2604]: W0714 22:38:07.787703 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.787770 kubelet[2604]: E0714 22:38:07.787715 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.788062 kubelet[2604]: E0714 22:38:07.788044 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.788062 kubelet[2604]: W0714 22:38:07.788059 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.788135 kubelet[2604]: E0714 22:38:07.788071 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.788135 kubelet[2604]: I0714 22:38:07.788096 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwg9h\" (UniqueName: \"kubernetes.io/projected/12e54208-f5d7-4225-a878-cbfd7ce81981-kube-api-access-hwg9h\") pod \"csi-node-driver-rnmm5\" (UID: \"12e54208-f5d7-4225-a878-cbfd7ce81981\") " pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:07.789349 kubelet[2604]: E0714 22:38:07.789017 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.789349 kubelet[2604]: W0714 22:38:07.789033 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.789349 kubelet[2604]: E0714 22:38:07.789042 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.789349 kubelet[2604]: I0714 22:38:07.789065 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/12e54208-f5d7-4225-a878-cbfd7ce81981-registration-dir\") pod \"csi-node-driver-rnmm5\" (UID: \"12e54208-f5d7-4225-a878-cbfd7ce81981\") " pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:07.789349 kubelet[2604]: E0714 22:38:07.789318 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.789349 kubelet[2604]: W0714 22:38:07.789330 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.789349 kubelet[2604]: E0714 22:38:07.789341 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.789648 kubelet[2604]: E0714 22:38:07.789587 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.789648 kubelet[2604]: W0714 22:38:07.789598 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.789648 kubelet[2604]: E0714 22:38:07.789609 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.789845 kubelet[2604]: E0714 22:38:07.789813 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.789845 kubelet[2604]: W0714 22:38:07.789828 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.789845 kubelet[2604]: E0714 22:38:07.789840 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.790076 kubelet[2604]: E0714 22:38:07.790057 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.790076 kubelet[2604]: W0714 22:38:07.790072 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.790160 kubelet[2604]: E0714 22:38:07.790082 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.790299 kubelet[2604]: E0714 22:38:07.790268 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.790299 kubelet[2604]: W0714 22:38:07.790282 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.790299 kubelet[2604]: E0714 22:38:07.790293 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.790524 kubelet[2604]: E0714 22:38:07.790504 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.790524 kubelet[2604]: W0714 22:38:07.790518 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.790524 kubelet[2604]: E0714 22:38:07.790532 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.797820 containerd[1471]: time="2025-07-14T22:38:07.797700198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:07.797820 containerd[1471]: time="2025-07-14T22:38:07.797762788Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:07.797820 containerd[1471]: time="2025-07-14T22:38:07.797775572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:07.797974 containerd[1471]: time="2025-07-14T22:38:07.797867018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:07.819956 containerd[1471]: time="2025-07-14T22:38:07.819912279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-np8k4,Uid:18d4bf30-41d5-4060-8e47-fcec72a88269,Namespace:calico-system,Attempt:0,}" Jul 14 22:38:07.820744 systemd[1]: Started cri-containerd-ecf0b45fa58154e1b9575d0546bdacfa8cfa2f885cfe7550b31fa5d568d886e8.scope - libcontainer container ecf0b45fa58154e1b9575d0546bdacfa8cfa2f885cfe7550b31fa5d568d886e8. 
Jul 14 22:38:07.887042 containerd[1471]: time="2025-07-14T22:38:07.886977074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c4dcd5599-87cdq,Uid:8a01de9e-a2f4-4bf6-9224-d080757b4066,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecf0b45fa58154e1b9575d0546bdacfa8cfa2f885cfe7550b31fa5d568d886e8\"" Jul 14 22:38:07.890256 kubelet[2604]: E0714 22:38:07.890223 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:07.890790 kubelet[2604]: E0714 22:38:07.890753 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.891114 kubelet[2604]: W0714 22:38:07.891087 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.891153 kubelet[2604]: E0714 22:38:07.891118 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.891535 kubelet[2604]: E0714 22:38:07.891512 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.891535 kubelet[2604]: W0714 22:38:07.891529 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.891621 kubelet[2604]: E0714 22:38:07.891542 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.891818 kubelet[2604]: E0714 22:38:07.891799 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.891818 kubelet[2604]: W0714 22:38:07.891814 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.891902 kubelet[2604]: E0714 22:38:07.891826 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.892144 kubelet[2604]: E0714 22:38:07.892127 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.892144 kubelet[2604]: W0714 22:38:07.892140 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.892220 kubelet[2604]: E0714 22:38:07.892150 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.892373 containerd[1471]: time="2025-07-14T22:38:07.892336667Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 22:38:07.892740 kubelet[2604]: E0714 22:38:07.892444 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.892740 kubelet[2604]: W0714 22:38:07.892726 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.892740 kubelet[2604]: E0714 22:38:07.892740 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.893852 kubelet[2604]: E0714 22:38:07.893806 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.893852 kubelet[2604]: W0714 22:38:07.893825 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.893852 kubelet[2604]: E0714 22:38:07.893836 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.894207 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896492 kubelet[2604]: W0714 22:38:07.894224 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.894234 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.894743 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896492 kubelet[2604]: W0714 22:38:07.894752 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.894762 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.895112 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896492 kubelet[2604]: W0714 22:38:07.895121 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.895138 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.896492 kubelet[2604]: E0714 22:38:07.895418 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896865 kubelet[2604]: W0714 22:38:07.895442 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.896865 kubelet[2604]: E0714 22:38:07.895478 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.896865 kubelet[2604]: E0714 22:38:07.895735 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896865 kubelet[2604]: W0714 22:38:07.895743 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.896865 kubelet[2604]: E0714 22:38:07.895751 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.896865 kubelet[2604]: E0714 22:38:07.896045 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896865 kubelet[2604]: W0714 22:38:07.896057 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.896865 kubelet[2604]: E0714 22:38:07.896072 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.896865 kubelet[2604]: E0714 22:38:07.896469 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.896865 kubelet[2604]: W0714 22:38:07.896481 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.897122 kubelet[2604]: E0714 22:38:07.896492 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.897122 kubelet[2604]: E0714 22:38:07.897049 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.897122 kubelet[2604]: W0714 22:38:07.897078 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.897122 kubelet[2604]: E0714 22:38:07.897092 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.898771 kubelet[2604]: E0714 22:38:07.897375 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.898771 kubelet[2604]: W0714 22:38:07.897397 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.898771 kubelet[2604]: E0714 22:38:07.897409 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.898771 kubelet[2604]: E0714 22:38:07.897868 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.898771 kubelet[2604]: W0714 22:38:07.897904 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.898771 kubelet[2604]: E0714 22:38:07.898041 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.898771 kubelet[2604]: E0714 22:38:07.898648 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.898771 kubelet[2604]: W0714 22:38:07.898660 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.898771 kubelet[2604]: E0714 22:38:07.898672 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.899295 kubelet[2604]: E0714 22:38:07.899270 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.899295 kubelet[2604]: W0714 22:38:07.899289 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.899383 kubelet[2604]: E0714 22:38:07.899302 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.899606 kubelet[2604]: E0714 22:38:07.899580 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.899606 kubelet[2604]: W0714 22:38:07.899595 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.899606 kubelet[2604]: E0714 22:38:07.899607 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.900088 kubelet[2604]: E0714 22:38:07.900069 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.900088 kubelet[2604]: W0714 22:38:07.900085 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.900155 kubelet[2604]: E0714 22:38:07.900110 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.900533 kubelet[2604]: E0714 22:38:07.900498 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.900533 kubelet[2604]: W0714 22:38:07.900527 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.900593 kubelet[2604]: E0714 22:38:07.900539 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.900827 kubelet[2604]: E0714 22:38:07.900805 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.900827 kubelet[2604]: W0714 22:38:07.900820 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.900949 kubelet[2604]: E0714 22:38:07.900831 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.901232 kubelet[2604]: E0714 22:38:07.901194 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.901232 kubelet[2604]: W0714 22:38:07.901212 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.901232 kubelet[2604]: E0714 22:38:07.901224 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.901614 kubelet[2604]: E0714 22:38:07.901589 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.901662 kubelet[2604]: W0714 22:38:07.901647 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.901838 kubelet[2604]: E0714 22:38:07.901664 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:07.902630 kubelet[2604]: E0714 22:38:07.902600 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.902630 kubelet[2604]: W0714 22:38:07.902620 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.902725 kubelet[2604]: E0714 22:38:07.902633 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.905630 containerd[1471]: time="2025-07-14T22:38:07.904899762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:07.905630 containerd[1471]: time="2025-07-14T22:38:07.905470996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:07.905630 containerd[1471]: time="2025-07-14T22:38:07.905515270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:07.905878 containerd[1471]: time="2025-07-14T22:38:07.905827448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:07.925086 kubelet[2604]: E0714 22:38:07.924977 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:07.925086 kubelet[2604]: W0714 22:38:07.925000 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:07.925086 kubelet[2604]: E0714 22:38:07.925020 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:07.945773 systemd[1]: Started cri-containerd-0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901.scope - libcontainer container 0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901. 
Jul 14 22:38:07.987149 containerd[1471]: time="2025-07-14T22:38:07.987077283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-np8k4,Uid:18d4bf30-41d5-4060-8e47-fcec72a88269,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\"" Jul 14 22:38:09.521641 kubelet[2604]: E0714 22:38:09.521580 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:11.521824 kubelet[2604]: E0714 22:38:11.521749 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:13.522200 kubelet[2604]: E0714 22:38:13.522122 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:13.626826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2289188383.mount: Deactivated successfully. Jul 14 22:38:15.522060 kubelet[2604]: E0714 22:38:15.521991 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:16.495271 containerd[1471]: time="2025-07-14T22:38:16.495204345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:16.586010 containerd[1471]: time="2025-07-14T22:38:16.585914037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 14 22:38:16.663836 containerd[1471]: time="2025-07-14T22:38:16.663775393Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:16.815187 containerd[1471]: time="2025-07-14T22:38:16.815129853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:16.815926 containerd[1471]: time="2025-07-14T22:38:16.815891887Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 8.923509923s" Jul 14 22:38:16.815926 containerd[1471]: time="2025-07-14T22:38:16.815925611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference 
\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 22:38:16.817037 containerd[1471]: time="2025-07-14T22:38:16.816837910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:38:17.403206 containerd[1471]: time="2025-07-14T22:38:17.403128667Z" level=info msg="CreateContainer within sandbox \"ecf0b45fa58154e1b9575d0546bdacfa8cfa2f885cfe7550b31fa5d568d886e8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 22:38:17.521582 kubelet[2604]: E0714 22:38:17.521524 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:18.620360 containerd[1471]: time="2025-07-14T22:38:18.620306826Z" level=info msg="CreateContainer within sandbox \"ecf0b45fa58154e1b9575d0546bdacfa8cfa2f885cfe7550b31fa5d568d886e8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c7ebab1fa2d85f404c8814cb7bd4b07072c3b099460c3a3135c9033da04602e8\"" Jul 14 22:38:18.620839 containerd[1471]: time="2025-07-14T22:38:18.620805947Z" level=info msg="StartContainer for \"c7ebab1fa2d85f404c8814cb7bd4b07072c3b099460c3a3135c9033da04602e8\"" Jul 14 22:38:18.659729 systemd[1]: Started cri-containerd-c7ebab1fa2d85f404c8814cb7bd4b07072c3b099460c3a3135c9033da04602e8.scope - libcontainer container c7ebab1fa2d85f404c8814cb7bd4b07072c3b099460c3a3135c9033da04602e8. Jul 14 22:38:18.872069 containerd[1471]: time="2025-07-14T22:38:18.871882264Z" level=info msg="StartContainer for \"c7ebab1fa2d85f404c8814cb7bd4b07072c3b099460c3a3135c9033da04602e8\" returns successfully" Jul 14 22:38:19.523297 kubelet[2604]: E0714 22:38:19.523229 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:19.629331 kubelet[2604]: E0714 22:38:19.629250 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:19.639851 kubelet[2604]: E0714 22:38:19.639767 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.639851 kubelet[2604]: W0714 22:38:19.639805 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.639851 kubelet[2604]: E0714 22:38:19.639835 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.640266 kubelet[2604]: E0714 22:38:19.640233 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.640266 kubelet[2604]: W0714 22:38:19.640257 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.640343 kubelet[2604]: E0714 22:38:19.640287 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.640639 kubelet[2604]: E0714 22:38:19.640619 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.640639 kubelet[2604]: W0714 22:38:19.640632 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.640727 kubelet[2604]: E0714 22:38:19.640643 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.641001 kubelet[2604]: E0714 22:38:19.640960 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.641001 kubelet[2604]: W0714 22:38:19.640977 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.641001 kubelet[2604]: E0714 22:38:19.640992 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.641259 kubelet[2604]: E0714 22:38:19.641239 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.641259 kubelet[2604]: W0714 22:38:19.641254 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.641316 kubelet[2604]: E0714 22:38:19.641266 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.641554 kubelet[2604]: E0714 22:38:19.641529 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.641554 kubelet[2604]: W0714 22:38:19.641546 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.641554 kubelet[2604]: E0714 22:38:19.641559 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.641824 kubelet[2604]: E0714 22:38:19.641793 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.641824 kubelet[2604]: W0714 22:38:19.641806 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.641824 kubelet[2604]: E0714 22:38:19.641817 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.642336 kubelet[2604]: E0714 22:38:19.642062 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.642336 kubelet[2604]: W0714 22:38:19.642074 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.642336 kubelet[2604]: E0714 22:38:19.642084 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.642336 kubelet[2604]: E0714 22:38:19.642304 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.642336 kubelet[2604]: W0714 22:38:19.642314 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.642336 kubelet[2604]: E0714 22:38:19.642325 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.642573 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.643638 kubelet[2604]: W0714 22:38:19.642584 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.642596 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.642839 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.643638 kubelet[2604]: W0714 22:38:19.642850 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.642897 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.643194 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.643638 kubelet[2604]: W0714 22:38:19.643206 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.643217 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.643638 kubelet[2604]: E0714 22:38:19.643529 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.643960 containerd[1471]: time="2025-07-14T22:38:19.643584008Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:19.644385 kubelet[2604]: W0714 22:38:19.643540 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.644385 kubelet[2604]: E0714 22:38:19.643550 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.644385 kubelet[2604]: E0714 22:38:19.643789 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.644385 kubelet[2604]: W0714 22:38:19.643799 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.644385 kubelet[2604]: E0714 22:38:19.643809 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.644385 kubelet[2604]: E0714 22:38:19.644079 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.644385 kubelet[2604]: W0714 22:38:19.644091 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.644385 kubelet[2604]: E0714 22:38:19.644102 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.646613 containerd[1471]: time="2025-07-14T22:38:19.644637806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 14 22:38:19.646613 containerd[1471]: time="2025-07-14T22:38:19.646166156Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:19.649323 containerd[1471]: time="2025-07-14T22:38:19.649271169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:19.650024 containerd[1471]: time="2025-07-14T22:38:19.649981181Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 2.83311088s" Jul 14 22:38:19.650024 containerd[1471]: time="2025-07-14T22:38:19.650016479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 14 22:38:19.670043 kubelet[2604]: E0714 22:38:19.670001 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.670043 kubelet[2604]: W0714 22:38:19.670022 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.670043 kubelet[2604]: E0714 22:38:19.670044 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.670461 kubelet[2604]: E0714 22:38:19.670414 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.670461 kubelet[2604]: W0714 22:38:19.670437 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.670572 kubelet[2604]: E0714 22:38:19.670474 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.671083 kubelet[2604]: E0714 22:38:19.671051 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.671083 kubelet[2604]: W0714 22:38:19.671067 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.671083 kubelet[2604]: E0714 22:38:19.671078 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.671395 kubelet[2604]: E0714 22:38:19.671378 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.671395 kubelet[2604]: W0714 22:38:19.671391 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.671492 kubelet[2604]: E0714 22:38:19.671402 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.671685 kubelet[2604]: E0714 22:38:19.671668 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.671685 kubelet[2604]: W0714 22:38:19.671682 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.671748 kubelet[2604]: E0714 22:38:19.671692 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.672013 kubelet[2604]: E0714 22:38:19.671990 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.672013 kubelet[2604]: W0714 22:38:19.672006 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.672123 kubelet[2604]: E0714 22:38:19.672020 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.672332 kubelet[2604]: E0714 22:38:19.672310 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.672332 kubelet[2604]: W0714 22:38:19.672326 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.672410 kubelet[2604]: E0714 22:38:19.672339 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.672620 kubelet[2604]: E0714 22:38:19.672602 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.672620 kubelet[2604]: W0714 22:38:19.672617 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.672702 kubelet[2604]: E0714 22:38:19.672628 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.672934 kubelet[2604]: E0714 22:38:19.672912 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.672934 kubelet[2604]: W0714 22:38:19.672926 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.673040 kubelet[2604]: E0714 22:38:19.672937 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.673407 kubelet[2604]: E0714 22:38:19.673384 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.673407 kubelet[2604]: W0714 22:38:19.673399 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.673411 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.673725 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.675915 kubelet[2604]: W0714 22:38:19.673738 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.673747 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.674310 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.675915 kubelet[2604]: W0714 22:38:19.674341 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.674366 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.674748 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.675915 kubelet[2604]: W0714 22:38:19.674758 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.675915 kubelet[2604]: E0714 22:38:19.674768 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675050 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.676679 kubelet[2604]: W0714 22:38:19.675061 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675075 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675324 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.676679 kubelet[2604]: W0714 22:38:19.675335 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675344 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675626 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.676679 kubelet[2604]: W0714 22:38:19.675637 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675647 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.676679 kubelet[2604]: E0714 22:38:19.675944 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.677506 kubelet[2604]: W0714 22:38:19.675970 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.677506 kubelet[2604]: E0714 22:38:19.675982 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:19.677506 kubelet[2604]: E0714 22:38:19.676672 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:19.677506 kubelet[2604]: W0714 22:38:19.676705 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:19.677506 kubelet[2604]: E0714 22:38:19.676731 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.755807 containerd[1471]: time="2025-07-14T22:38:20.755738911Z" level=info msg="CreateContainer within sandbox \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 14 22:38:20.814319 kubelet[2604]: I0714 22:38:20.814241 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:38:20.814959 kubelet[2604]: E0714 22:38:20.814917 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:20.852257 kubelet[2604]: E0714 22:38:20.852214 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.852257 kubelet[2604]: W0714 22:38:20.852236 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.852257 kubelet[2604]: E0714 22:38:20.852254 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.852549 kubelet[2604]: E0714 22:38:20.852522 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.852549 kubelet[2604]: W0714 22:38:20.852536 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.852549 kubelet[2604]: E0714 22:38:20.852545 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.852867 kubelet[2604]: E0714 22:38:20.852838 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.852867 kubelet[2604]: W0714 22:38:20.852850 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.852867 kubelet[2604]: E0714 22:38:20.852857 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.853128 kubelet[2604]: E0714 22:38:20.853100 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.853128 kubelet[2604]: W0714 22:38:20.853112 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.853128 kubelet[2604]: E0714 22:38:20.853121 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.853356 kubelet[2604]: E0714 22:38:20.853328 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.853356 kubelet[2604]: W0714 22:38:20.853341 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.853356 kubelet[2604]: E0714 22:38:20.853349 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.853602 kubelet[2604]: E0714 22:38:20.853576 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.853602 kubelet[2604]: W0714 22:38:20.853589 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.853602 kubelet[2604]: E0714 22:38:20.853597 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.853803 kubelet[2604]: E0714 22:38:20.853777 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.853803 kubelet[2604]: W0714 22:38:20.853788 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.853803 kubelet[2604]: E0714 22:38:20.853796 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.854018 kubelet[2604]: E0714 22:38:20.853999 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.854018 kubelet[2604]: W0714 22:38:20.854010 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.854018 kubelet[2604]: E0714 22:38:20.854018 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.854245 kubelet[2604]: E0714 22:38:20.854227 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.854245 kubelet[2604]: W0714 22:38:20.854237 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.854245 kubelet[2604]: E0714 22:38:20.854245 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.854483 kubelet[2604]: E0714 22:38:20.854463 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.854483 kubelet[2604]: W0714 22:38:20.854474 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.854534 kubelet[2604]: E0714 22:38:20.854483 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.854707 kubelet[2604]: E0714 22:38:20.854689 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.854707 kubelet[2604]: W0714 22:38:20.854703 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.854758 kubelet[2604]: E0714 22:38:20.854713 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.854915 kubelet[2604]: E0714 22:38:20.854897 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.854915 kubelet[2604]: W0714 22:38:20.854907 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.854915 kubelet[2604]: E0714 22:38:20.854914 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.855130 kubelet[2604]: E0714 22:38:20.855108 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.855130 kubelet[2604]: W0714 22:38:20.855119 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.855130 kubelet[2604]: E0714 22:38:20.855127 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.855329 kubelet[2604]: E0714 22:38:20.855310 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.855329 kubelet[2604]: W0714 22:38:20.855320 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.855329 kubelet[2604]: E0714 22:38:20.855328 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.855594 kubelet[2604]: E0714 22:38:20.855566 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.855594 kubelet[2604]: W0714 22:38:20.855579 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.855594 kubelet[2604]: E0714 22:38:20.855586 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.880011 kubelet[2604]: E0714 22:38:20.879968 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.880011 kubelet[2604]: W0714 22:38:20.879995 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.880011 kubelet[2604]: E0714 22:38:20.880016 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.880304 kubelet[2604]: E0714 22:38:20.880272 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.880304 kubelet[2604]: W0714 22:38:20.880291 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.880304 kubelet[2604]: E0714 22:38:20.880304 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.880749 kubelet[2604]: E0714 22:38:20.880700 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.880749 kubelet[2604]: W0714 22:38:20.880729 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.880749 kubelet[2604]: E0714 22:38:20.880757 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.881176 kubelet[2604]: E0714 22:38:20.881156 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.881176 kubelet[2604]: W0714 22:38:20.881171 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.881281 kubelet[2604]: E0714 22:38:20.881182 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.881443 kubelet[2604]: E0714 22:38:20.881426 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.881443 kubelet[2604]: W0714 22:38:20.881439 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.881542 kubelet[2604]: E0714 22:38:20.881464 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.881697 kubelet[2604]: E0714 22:38:20.881678 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.881697 kubelet[2604]: W0714 22:38:20.881689 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.881697 kubelet[2604]: E0714 22:38:20.881697 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.881928 kubelet[2604]: E0714 22:38:20.881907 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.881928 kubelet[2604]: W0714 22:38:20.881926 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.882027 kubelet[2604]: E0714 22:38:20.881936 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.882174 kubelet[2604]: E0714 22:38:20.882156 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.882174 kubelet[2604]: W0714 22:38:20.882167 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.882174 kubelet[2604]: E0714 22:38:20.882175 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.882436 kubelet[2604]: E0714 22:38:20.882417 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.882436 kubelet[2604]: W0714 22:38:20.882429 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.882436 kubelet[2604]: E0714 22:38:20.882439 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.882750 kubelet[2604]: E0714 22:38:20.882729 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.882750 kubelet[2604]: W0714 22:38:20.882744 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.882835 kubelet[2604]: E0714 22:38:20.882754 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.883063 kubelet[2604]: E0714 22:38:20.883044 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.883063 kubelet[2604]: W0714 22:38:20.883055 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.883063 kubelet[2604]: E0714 22:38:20.883064 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.883344 kubelet[2604]: E0714 22:38:20.883317 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.883344 kubelet[2604]: W0714 22:38:20.883331 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.883422 kubelet[2604]: E0714 22:38:20.883351 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.883617 kubelet[2604]: E0714 22:38:20.883592 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.883617 kubelet[2604]: W0714 22:38:20.883604 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.883617 kubelet[2604]: E0714 22:38:20.883613 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.883831 kubelet[2604]: E0714 22:38:20.883809 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.883831 kubelet[2604]: W0714 22:38:20.883820 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.883831 kubelet[2604]: E0714 22:38:20.883829 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:20.884060 kubelet[2604]: E0714 22:38:20.884037 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.884060 kubelet[2604]: W0714 22:38:20.884059 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.884142 kubelet[2604]: E0714 22:38:20.884068 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.884302 kubelet[2604]: E0714 22:38:20.884278 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.884302 kubelet[2604]: W0714 22:38:20.884292 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.884302 kubelet[2604]: E0714 22:38:20.884302 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.884736 kubelet[2604]: E0714 22:38:20.884701 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.884736 kubelet[2604]: W0714 22:38:20.884722 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.884736 kubelet[2604]: E0714 22:38:20.884735 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:20.885048 kubelet[2604]: E0714 22:38:20.885025 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:20.885048 kubelet[2604]: W0714 22:38:20.885042 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:20.885126 kubelet[2604]: E0714 22:38:20.885056 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.522215 kubelet[2604]: E0714 22:38:21.522158 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:21.836847 kubelet[2604]: E0714 22:38:21.836683 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:21.860568 kubelet[2604]: E0714 22:38:21.860529 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.860568 kubelet[2604]: W0714 22:38:21.860555 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.860662 kubelet[2604]: E0714 22:38:21.860575 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.860793 kubelet[2604]: E0714 22:38:21.860772 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.860793 kubelet[2604]: W0714 22:38:21.860784 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.860793 kubelet[2604]: E0714 22:38:21.860793 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.860997 kubelet[2604]: E0714 22:38:21.860977 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.860997 kubelet[2604]: W0714 22:38:21.860988 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.860997 kubelet[2604]: E0714 22:38:21.860996 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.861183 kubelet[2604]: E0714 22:38:21.861164 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.861183 kubelet[2604]: W0714 22:38:21.861175 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.861183 kubelet[2604]: E0714 22:38:21.861183 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.861369 kubelet[2604]: E0714 22:38:21.861351 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.861369 kubelet[2604]: W0714 22:38:21.861361 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.861369 kubelet[2604]: E0714 22:38:21.861369 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.861563 kubelet[2604]: E0714 22:38:21.861544 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.861563 kubelet[2604]: W0714 22:38:21.861555 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.861563 kubelet[2604]: E0714 22:38:21.861563 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.861752 kubelet[2604]: E0714 22:38:21.861724 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.861752 kubelet[2604]: W0714 22:38:21.861736 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.861752 kubelet[2604]: E0714 22:38:21.861743 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.861927 kubelet[2604]: E0714 22:38:21.861909 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.861927 kubelet[2604]: W0714 22:38:21.861919 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.861927 kubelet[2604]: E0714 22:38:21.861926 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.862125 kubelet[2604]: E0714 22:38:21.862107 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.862125 kubelet[2604]: W0714 22:38:21.862117 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.862177 kubelet[2604]: E0714 22:38:21.862124 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.862313 kubelet[2604]: E0714 22:38:21.862295 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.862313 kubelet[2604]: W0714 22:38:21.862304 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.862313 kubelet[2604]: E0714 22:38:21.862312 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.862500 kubelet[2604]: E0714 22:38:21.862482 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.862500 kubelet[2604]: W0714 22:38:21.862492 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.862500 kubelet[2604]: E0714 22:38:21.862500 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.862676 kubelet[2604]: E0714 22:38:21.862658 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.862676 kubelet[2604]: W0714 22:38:21.862668 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.862676 kubelet[2604]: E0714 22:38:21.862676 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.862854 kubelet[2604]: E0714 22:38:21.862836 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.862854 kubelet[2604]: W0714 22:38:21.862846 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.862902 kubelet[2604]: E0714 22:38:21.862855 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.863061 kubelet[2604]: E0714 22:38:21.863037 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.863061 kubelet[2604]: W0714 22:38:21.863054 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.863103 kubelet[2604]: E0714 22:38:21.863065 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.863271 kubelet[2604]: E0714 22:38:21.863252 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.863271 kubelet[2604]: W0714 22:38:21.863262 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.863271 kubelet[2604]: E0714 22:38:21.863270 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.886644 kubelet[2604]: E0714 22:38:21.886613 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.886644 kubelet[2604]: W0714 22:38:21.886632 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.886644 kubelet[2604]: E0714 22:38:21.886644 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.886913 kubelet[2604]: E0714 22:38:21.886893 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.886913 kubelet[2604]: W0714 22:38:21.886908 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.886977 kubelet[2604]: E0714 22:38:21.886919 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.887265 kubelet[2604]: E0714 22:38:21.887245 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.887265 kubelet[2604]: W0714 22:38:21.887260 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.887325 kubelet[2604]: E0714 22:38:21.887272 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.887558 kubelet[2604]: E0714 22:38:21.887541 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.887558 kubelet[2604]: W0714 22:38:21.887554 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.887623 kubelet[2604]: E0714 22:38:21.887565 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.887776 kubelet[2604]: E0714 22:38:21.887761 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.887776 kubelet[2604]: W0714 22:38:21.887771 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.887819 kubelet[2604]: E0714 22:38:21.887780 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.887988 kubelet[2604]: E0714 22:38:21.887972 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.887988 kubelet[2604]: W0714 22:38:21.887984 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.888041 kubelet[2604]: E0714 22:38:21.887992 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.888229 kubelet[2604]: E0714 22:38:21.888212 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.888229 kubelet[2604]: W0714 22:38:21.888223 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.888284 kubelet[2604]: E0714 22:38:21.888232 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.888439 kubelet[2604]: E0714 22:38:21.888424 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.888439 kubelet[2604]: W0714 22:38:21.888435 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.888510 kubelet[2604]: E0714 22:38:21.888459 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.888670 kubelet[2604]: E0714 22:38:21.888655 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.888670 kubelet[2604]: W0714 22:38:21.888666 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.888716 kubelet[2604]: E0714 22:38:21.888674 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.888876 kubelet[2604]: E0714 22:38:21.888858 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.888876 kubelet[2604]: W0714 22:38:21.888869 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.888926 kubelet[2604]: E0714 22:38:21.888879 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.889077 kubelet[2604]: E0714 22:38:21.889061 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.889077 kubelet[2604]: W0714 22:38:21.889072 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.889119 kubelet[2604]: E0714 22:38:21.889079 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.889302 kubelet[2604]: E0714 22:38:21.889285 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.889302 kubelet[2604]: W0714 22:38:21.889297 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.889366 kubelet[2604]: E0714 22:38:21.889306 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.889646 kubelet[2604]: E0714 22:38:21.889626 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.889646 kubelet[2604]: W0714 22:38:21.889642 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.889697 kubelet[2604]: E0714 22:38:21.889654 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.889940 kubelet[2604]: E0714 22:38:21.889919 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.889940 kubelet[2604]: W0714 22:38:21.889937 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.890032 kubelet[2604]: E0714 22:38:21.889962 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:21.890259 kubelet[2604]: E0714 22:38:21.890241 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.890259 kubelet[2604]: W0714 22:38:21.890256 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.890310 kubelet[2604]: E0714 22:38:21.890266 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.890563 kubelet[2604]: E0714 22:38:21.890545 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.890563 kubelet[2604]: W0714 22:38:21.890559 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.890621 kubelet[2604]: E0714 22:38:21.890570 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.890927 kubelet[2604]: E0714 22:38:21.890905 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.890927 kubelet[2604]: W0714 22:38:21.890924 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.891000 kubelet[2604]: E0714 22:38:21.890938 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:38:21.891197 kubelet[2604]: E0714 22:38:21.891179 2604 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:38:21.891197 kubelet[2604]: W0714 22:38:21.891194 2604 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:38:21.891249 kubelet[2604]: E0714 22:38:21.891205 2604 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:38:22.269781 kubelet[2604]: I0714 22:38:22.269713 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c4dcd5599-87cdq" podStartSLOduration=6.344575706 podStartE2EDuration="15.269696998s" podCreationTimestamp="2025-07-14 22:38:07 +0000 UTC" firstStartedPulling="2025-07-14 22:38:07.891544079 +0000 UTC m=+27.453848715" lastFinishedPulling="2025-07-14 22:38:16.816665371 +0000 UTC m=+36.378970007" observedRunningTime="2025-07-14 22:38:19.646589593 +0000 UTC m=+39.208894229" watchObservedRunningTime="2025-07-14 22:38:22.269696998 +0000 UTC m=+41.832001634" Jul 14 22:38:22.508329 containerd[1471]: time="2025-07-14T22:38:22.508278014Z" level=info msg="CreateContainer within sandbox \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d\"" Jul 14 22:38:22.509277 containerd[1471]: time="2025-07-14T22:38:22.509233853Z" level=info msg="StartContainer for \"b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d\"" Jul 14 22:38:22.543644 systemd[1]: Started cri-containerd-b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d.scope - libcontainer container b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d. Jul 14 22:38:22.589192 systemd[1]: cri-containerd-b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d.scope: Deactivated successfully. Jul 14 22:38:22.658261 containerd[1471]: time="2025-07-14T22:38:22.658191630Z" level=info msg="StartContainer for \"b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d\" returns successfully" Jul 14 22:38:22.682123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d-rootfs.mount: Deactivated successfully. 
Jul 14 22:38:23.521606 kubelet[2604]: E0714 22:38:23.521541 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:23.823145 containerd[1471]: time="2025-07-14T22:38:23.820149148Z" level=info msg="shim disconnected" id=b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d namespace=k8s.io Jul 14 22:38:23.823145 containerd[1471]: time="2025-07-14T22:38:23.823035848Z" level=warning msg="cleaning up after shim disconnected" id=b3af98e9fa7f30edf500eb25099479b2ce07af03f08dad9fce3a169dd5d8014d namespace=k8s.io Jul 14 22:38:23.823145 containerd[1471]: time="2025-07-14T22:38:23.823048232Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:38:24.704272 containerd[1471]: time="2025-07-14T22:38:24.704215345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 14 22:38:25.521705 kubelet[2604]: E0714 22:38:25.521635 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:27.521183 kubelet[2604]: E0714 22:38:27.521133 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:29.522387 kubelet[2604]: E0714 22:38:29.522257 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:31.086215 containerd[1471]: time="2025-07-14T22:38:31.086080766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:31.183885 containerd[1471]: time="2025-07-14T22:38:31.183768795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 14 22:38:31.253903 containerd[1471]: time="2025-07-14T22:38:31.253837290Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:31.392432 containerd[1471]: time="2025-07-14T22:38:31.392284504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:31.393263 containerd[1471]: time="2025-07-14T22:38:31.393225119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 6.688969787s" Jul 14 22:38:31.393344 containerd[1471]: time="2025-07-14T22:38:31.393263241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 14 22:38:31.522124 kubelet[2604]: E0714 22:38:31.522070 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:31.641224 containerd[1471]: time="2025-07-14T22:38:31.641166432Z" level=info msg="CreateContainer within sandbox \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 14 22:38:32.359788 containerd[1471]: time="2025-07-14T22:38:32.359717118Z" level=info msg="CreateContainer within sandbox \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0\"" Jul 14 22:38:32.360342 containerd[1471]: time="2025-07-14T22:38:32.360318098Z" level=info msg="StartContainer for \"78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0\"" Jul 14 22:38:32.399755 systemd[1]: Started cri-containerd-78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0.scope - libcontainer container 78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0. Jul 14 22:38:32.833676 containerd[1471]: time="2025-07-14T22:38:32.833610016Z" level=info msg="StartContainer for \"78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0\" returns successfully" Jul 14 22:38:33.521806 kubelet[2604]: E0714 22:38:33.521741 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:35.272784 systemd[1]: cri-containerd-78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0.scope: Deactivated successfully. Jul 14 22:38:35.305031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0-rootfs.mount: Deactivated successfully. 
Jul 14 22:38:35.319696 containerd[1471]: time="2025-07-14T22:38:35.319621916Z" level=info msg="shim disconnected" id=78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0 namespace=k8s.io Jul 14 22:38:35.319696 containerd[1471]: time="2025-07-14T22:38:35.319687541Z" level=warning msg="cleaning up after shim disconnected" id=78f1cd0be7c66d76f5b2683a14b5662c9ab03dd87ca256a45bc5301c38c9c8d0 namespace=k8s.io Jul 14 22:38:35.319696 containerd[1471]: time="2025-07-14T22:38:35.319700726Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:38:35.335664 kubelet[2604]: I0714 22:38:35.335623 2604 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 14 22:38:35.528049 systemd[1]: Created slice kubepods-besteffort-pod12e54208_f5d7_4225_a878_cbfd7ce81981.slice - libcontainer container kubepods-besteffort-pod12e54208_f5d7_4225_a878_cbfd7ce81981.slice. Jul 14 22:38:35.530754 containerd[1471]: time="2025-07-14T22:38:35.530709291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnmm5,Uid:12e54208-f5d7-4225-a878-cbfd7ce81981,Namespace:calico-system,Attempt:0,}" Jul 14 22:38:35.844419 containerd[1471]: time="2025-07-14T22:38:35.844264157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:38:36.275921 kubelet[2604]: I0714 22:38:36.275871 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-ca-bundle\") pod \"whisker-845d86f57b-r96bf\" (UID: \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\") " pod="calico-system/whisker-845d86f57b-r96bf" Jul 14 22:38:36.275921 kubelet[2604]: I0714 22:38:36.275912 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnmv6\" (UniqueName: \"kubernetes.io/projected/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-kube-api-access-mnmv6\") pod \"whisker-845d86f57b-r96bf\" (UID: \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\") " pod="calico-system/whisker-845d86f57b-r96bf" Jul 14 22:38:36.276115 kubelet[2604]: I0714 22:38:36.275936 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-backend-key-pair\") pod \"whisker-845d86f57b-r96bf\" (UID: \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\") " pod="calico-system/whisker-845d86f57b-r96bf" Jul 14 22:38:36.280542 systemd[1]: Created slice kubepods-besteffort-podbe1ccdfc_351c_44ad_8eeb_7552d4e4f518.slice - libcontainer container kubepods-besteffort-podbe1ccdfc_351c_44ad_8eeb_7552d4e4f518.slice. Jul 14 22:38:36.475956 systemd[1]: Created slice kubepods-besteffort-pod61b0e57f_63f8_4046_911a_210a3070cdd1.slice - libcontainer container kubepods-besteffort-pod61b0e57f_63f8_4046_911a_210a3070cdd1.slice. 
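The csi-node-driver-rnmm5 sandbox is being retried here while the kubelet still reports NetworkReady=false: the install-cni container above has only just run, and until a network config exists the runtime keeps answering "cni plugin not initialized". A quick way to see what the runtime is waiting for is to look for a config in the conventional CNI directory; a sketch, assuming containerd's default conf dir /etc/cni/net.d (the path is an assumption about this node, not read from the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // Until the runtime finds a CNI network config, RunPodSandbox keeps failing
    // and the kubelet reports NetworkReady=false. Check the conventional dir:
    func main() {
        lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
        confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
        if len(lists)+len(confs) == 0 {
            fmt.Fprintln(os.Stderr, "no CNI config yet: expect NetworkPluginNotReady")
            os.Exit(1)
        }
        fmt.Println("CNI config present:", append(lists, confs...))
    }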
Jul 14 22:38:36.476974 kubelet[2604]: I0714 22:38:36.476930 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61b0e57f-63f8-4046-911a-210a3070cdd1-calico-apiserver-certs\") pod \"calico-apiserver-d69cdc74-2pbv4\" (UID: \"61b0e57f-63f8-4046-911a-210a3070cdd1\") " pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" Jul 14 22:38:36.476974 kubelet[2604]: I0714 22:38:36.476969 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvtph\" (UniqueName: \"kubernetes.io/projected/61b0e57f-63f8-4046-911a-210a3070cdd1-kube-api-access-kvtph\") pod \"calico-apiserver-d69cdc74-2pbv4\" (UID: \"61b0e57f-63f8-4046-911a-210a3070cdd1\") " pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" Jul 14 22:38:36.751778 systemd[1]: Created slice kubepods-burstable-pod29dad085_0fbc_4a72_8160_b942ebda8dbc.slice - libcontainer container kubepods-burstable-pod29dad085_0fbc_4a72_8160_b942ebda8dbc.slice. Jul 14 22:38:36.778599 kubelet[2604]: I0714 22:38:36.778388 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntcf7\" (UniqueName: \"kubernetes.io/projected/b8c6f0a5-a93f-42f8-a410-795cf33b659f-kube-api-access-ntcf7\") pod \"goldmane-768f4c5c69-4z895\" (UID: \"b8c6f0a5-a93f-42f8-a410-795cf33b659f\") " pod="calico-system/goldmane-768f4c5c69-4z895" Jul 14 22:38:36.779372 kubelet[2604]: I0714 22:38:36.779345 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b8c6f0a5-a93f-42f8-a410-795cf33b659f-goldmane-key-pair\") pod \"goldmane-768f4c5c69-4z895\" (UID: \"b8c6f0a5-a93f-42f8-a410-795cf33b659f\") " pod="calico-system/goldmane-768f4c5c69-4z895" Jul 14 22:38:36.781442 containerd[1471]: time="2025-07-14T22:38:36.779715804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-2pbv4,Uid:61b0e57f-63f8-4046-911a-210a3070cdd1,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:38:36.780060 systemd[1]: Created slice kubepods-burstable-pod23ede824_2f3c_4c4c_b760_db02257f0bab.slice - libcontainer container kubepods-burstable-pod23ede824_2f3c_4c4c_b760_db02257f0bab.slice. 
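The slice names in these "Created slice" entries are mechanical: the pod's QoS class selects the parent slice (kubepods-besteffort for the calico-apiserver pod, kubepods-burstable for the coredns pods), and the pod UID is appended with its dashes escaped to underscores, since "-" separates hierarchy levels in systemd unit names. A sketch of that mapping, where podSliceName is an illustrative helper rather than kubelet code:

    package main

    import (
        "fmt"
        "strings"
    )

    // podSliceName reproduces the naming visible in the journal:
    // kubepods-<qos>-pod<uid with dashes escaped>.slice
    func podSliceName(qos, uid string) string {
        return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "61b0e57f-63f8-4046-911a-210a3070cdd1"))
        // kubepods-besteffort-pod61b0e57f_63f8_4046_911a_210a3070cdd1.slice
        fmt.Println(podSliceName("burstable", "23ede824-2f3c-4c4c-b760-db02257f0bab"))
        // kubepods-burstable-pod23ede824_2f3c_4c4c_b760_db02257f0bab.slice
    }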
Jul 14 22:38:36.784503 kubelet[2604]: I0714 22:38:36.784197 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23ede824-2f3c-4c4c-b760-db02257f0bab-config-volume\") pod \"coredns-674b8bbfcf-9lnpr\" (UID: \"23ede824-2f3c-4c4c-b760-db02257f0bab\") " pod="kube-system/coredns-674b8bbfcf-9lnpr" Jul 14 22:38:36.784503 kubelet[2604]: I0714 22:38:36.784242 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbbhh\" (UniqueName: \"kubernetes.io/projected/23ede824-2f3c-4c4c-b760-db02257f0bab-kube-api-access-rbbhh\") pod \"coredns-674b8bbfcf-9lnpr\" (UID: \"23ede824-2f3c-4c4c-b760-db02257f0bab\") " pod="kube-system/coredns-674b8bbfcf-9lnpr" Jul 14 22:38:36.784503 kubelet[2604]: I0714 22:38:36.784275 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b8c6f0a5-a93f-42f8-a410-795cf33b659f-config\") pod \"goldmane-768f4c5c69-4z895\" (UID: \"b8c6f0a5-a93f-42f8-a410-795cf33b659f\") " pod="calico-system/goldmane-768f4c5c69-4z895" Jul 14 22:38:36.784503 kubelet[2604]: I0714 22:38:36.784296 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/121a6ca1-b03e-4bca-84d8-4cf70c6b267d-tigera-ca-bundle\") pod \"calico-kube-controllers-798dc4cdb-fh8g7\" (UID: \"121a6ca1-b03e-4bca-84d8-4cf70c6b267d\") " pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" Jul 14 22:38:36.784503 kubelet[2604]: I0714 22:38:36.784333 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/29dad085-0fbc-4a72-8160-b942ebda8dbc-config-volume\") pod \"coredns-674b8bbfcf-zhgwx\" (UID: \"29dad085-0fbc-4a72-8160-b942ebda8dbc\") " pod="kube-system/coredns-674b8bbfcf-zhgwx" Jul 14 22:38:36.784751 kubelet[2604]: I0714 22:38:36.784352 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8c6f0a5-a93f-42f8-a410-795cf33b659f-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-4z895\" (UID: \"b8c6f0a5-a93f-42f8-a410-795cf33b659f\") " pod="calico-system/goldmane-768f4c5c69-4z895" Jul 14 22:38:36.784751 kubelet[2604]: I0714 22:38:36.784369 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7drck\" (UniqueName: \"kubernetes.io/projected/29dad085-0fbc-4a72-8160-b942ebda8dbc-kube-api-access-7drck\") pod \"coredns-674b8bbfcf-zhgwx\" (UID: \"29dad085-0fbc-4a72-8160-b942ebda8dbc\") " pod="kube-system/coredns-674b8bbfcf-zhgwx" Jul 14 22:38:36.784751 kubelet[2604]: I0714 22:38:36.784386 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-457zt\" (UniqueName: \"kubernetes.io/projected/121a6ca1-b03e-4bca-84d8-4cf70c6b267d-kube-api-access-457zt\") pod \"calico-kube-controllers-798dc4cdb-fh8g7\" (UID: \"121a6ca1-b03e-4bca-84d8-4cf70c6b267d\") " pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" Jul 14 22:38:36.799155 systemd[1]: Created slice kubepods-besteffort-podb8c6f0a5_a93f_42f8_a410_795cf33b659f.slice - libcontainer container kubepods-besteffort-podb8c6f0a5_a93f_42f8_a410_795cf33b659f.slice. 
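Every kubelet message in this capture carries a klog header: a severity letter fused with MMDD (E0714, W0714, I0714), the wall-clock time, a padded PID field, and the emitting file:line such as reconciler_common.go:251 or driver-call.go:262. When post-processing a capture like this one, the header can be split off with a small regex; a sketch (the field names are my own, the klog header layout itself is standard):

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogHeader matches headers like "E0714 22:38:36.899579 2604 log.go:32]".
    var klogHeader = regexp.MustCompile(
        `([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\]`)

    func main() {
        line := `I0714 22:38:36.784197 2604 reconciler_common.go:251] "..."`
        if m := klogHeader.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }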
Jul 14 22:38:36.822249 systemd[1]: Created slice kubepods-besteffort-pod121a6ca1_b03e_4bca_84d8_4cf70c6b267d.slice - libcontainer container kubepods-besteffort-pod121a6ca1_b03e_4bca_84d8_4cf70c6b267d.slice. Jul 14 22:38:36.825436 systemd[1]: Created slice kubepods-besteffort-podeb8d9286_50a6_4899_a9a8_90e0bbd55a23.slice - libcontainer container kubepods-besteffort-podeb8d9286_50a6_4899_a9a8_90e0bbd55a23.slice. Jul 14 22:38:36.883857 containerd[1471]: time="2025-07-14T22:38:36.883793694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-845d86f57b-r96bf,Uid:be1ccdfc-351c-44ad-8eeb-7552d4e4f518,Namespace:calico-system,Attempt:0,}" Jul 14 22:38:36.884406 containerd[1471]: time="2025-07-14T22:38:36.884307108Z" level=error msg="Failed to destroy network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.887555 kubelet[2604]: I0714 22:38:36.884894 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eb8d9286-50a6-4899-a9a8-90e0bbd55a23-calico-apiserver-certs\") pod \"calico-apiserver-d69cdc74-djhwk\" (UID: \"eb8d9286-50a6-4899-a9a8-90e0bbd55a23\") " pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" Jul 14 22:38:36.887555 kubelet[2604]: I0714 22:38:36.884961 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4km8\" (UniqueName: \"kubernetes.io/projected/eb8d9286-50a6-4899-a9a8-90e0bbd55a23-kube-api-access-h4km8\") pod \"calico-apiserver-d69cdc74-djhwk\" (UID: \"eb8d9286-50a6-4899-a9a8-90e0bbd55a23\") " pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" Jul 14 22:38:36.898995 containerd[1471]: time="2025-07-14T22:38:36.898901666Z" level=error msg="encountered an error cleaning up failed sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.899150 containerd[1471]: time="2025-07-14T22:38:36.899112105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnmm5,Uid:12e54208-f5d7-4225-a878-cbfd7ce81981,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.899627 kubelet[2604]: E0714 22:38:36.899579 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.899722 kubelet[2604]: E0714 22:38:36.899682 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:36.899755 kubelet[2604]: E0714 22:38:36.899737 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rnmm5" Jul 14 22:38:36.899960 kubelet[2604]: E0714 22:38:36.899913 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rnmm5_calico-system(12e54208-f5d7-4225-a878-cbfd7ce81981)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rnmm5_calico-system(12e54208-f5d7-4225-a878-cbfd7ce81981)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnmm5" podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:36.963105 containerd[1471]: time="2025-07-14T22:38:36.963035106Z" level=error msg="Failed to destroy network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.964477 containerd[1471]: time="2025-07-14T22:38:36.964095196Z" level=error msg="encountered an error cleaning up failed sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.964477 containerd[1471]: time="2025-07-14T22:38:36.964181478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-2pbv4,Uid:61b0e57f-63f8-4046-911a-210a3070cdd1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.965003 kubelet[2604]: E0714 22:38:36.964600 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.965003 kubelet[2604]: E0714 22:38:36.964669 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" Jul 14 22:38:36.965003 kubelet[2604]: E0714 22:38:36.964694 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" Jul 14 22:38:36.965183 kubelet[2604]: E0714 22:38:36.964749 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d69cdc74-2pbv4_calico-apiserver(61b0e57f-63f8-4046-911a-210a3070cdd1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d69cdc74-2pbv4_calico-apiserver(61b0e57f-63f8-4046-911a-210a3070cdd1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" podUID="61b0e57f-63f8-4046-911a-210a3070cdd1" Jul 14 22:38:36.998569 containerd[1471]: time="2025-07-14T22:38:36.998051747Z" level=error msg="Failed to destroy network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.998569 containerd[1471]: time="2025-07-14T22:38:36.998438600Z" level=error msg="encountered an error cleaning up failed sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.998569 containerd[1471]: time="2025-07-14T22:38:36.998528741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-845d86f57b-r96bf,Uid:be1ccdfc-351c-44ad-8eeb-7552d4e4f518,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.999430 kubelet[2604]: E0714 22:38:36.998950 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:36.999430 kubelet[2604]: E0714 
22:38:36.999004 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-845d86f57b-r96bf" Jul 14 22:38:36.999430 kubelet[2604]: E0714 22:38:36.999025 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-845d86f57b-r96bf" Jul 14 22:38:36.999612 kubelet[2604]: E0714 22:38:36.999074 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-845d86f57b-r96bf_calico-system(be1ccdfc-351c-44ad-8eeb-7552d4e4f518)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-845d86f57b-r96bf_calico-system(be1ccdfc-351c-44ad-8eeb-7552d4e4f518)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-845d86f57b-r96bf" podUID="be1ccdfc-351c-44ad-8eeb-7552d4e4f518" Jul 14 22:38:37.060698 kubelet[2604]: E0714 22:38:37.060637 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:37.061362 containerd[1471]: time="2025-07-14T22:38:37.061308292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhgwx,Uid:29dad085-0fbc-4a72-8160-b942ebda8dbc,Namespace:kube-system,Attempt:0,}" Jul 14 22:38:37.092492 kubelet[2604]: E0714 22:38:37.092413 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:37.093180 containerd[1471]: time="2025-07-14T22:38:37.093138200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9lnpr,Uid:23ede824-2f3c-4c4c-b760-db02257f0bab,Namespace:kube-system,Attempt:0,}" Jul 14 22:38:37.107839 containerd[1471]: time="2025-07-14T22:38:37.107771256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4z895,Uid:b8c6f0a5-a93f-42f8-a410-795cf33b659f,Namespace:calico-system,Attempt:0,}" Jul 14 22:38:37.138358 containerd[1471]: time="2025-07-14T22:38:37.138305289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798dc4cdb-fh8g7,Uid:121a6ca1-b03e-4bca-84d8-4cf70c6b267d,Namespace:calico-system,Attempt:0,}" Jul 14 22:38:37.140265 containerd[1471]: time="2025-07-14T22:38:37.138590019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-djhwk,Uid:eb8d9286-50a6-4899-a9a8-90e0bbd55a23,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:38:37.143889 containerd[1471]: time="2025-07-14T22:38:37.143826009Z" level=error msg="Failed to 
destroy network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.144428 containerd[1471]: time="2025-07-14T22:38:37.144394977Z" level=error msg="encountered an error cleaning up failed sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.144511 containerd[1471]: time="2025-07-14T22:38:37.144478405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhgwx,Uid:29dad085-0fbc-4a72-8160-b942ebda8dbc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.144814 kubelet[2604]: E0714 22:38:37.144733 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.144814 kubelet[2604]: E0714 22:38:37.144806 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zhgwx" Jul 14 22:38:37.144926 kubelet[2604]: E0714 22:38:37.144830 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zhgwx" Jul 14 22:38:37.144926 kubelet[2604]: E0714 22:38:37.144896 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zhgwx_kube-system(29dad085-0fbc-4a72-8160-b942ebda8dbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zhgwx_kube-system(29dad085-0fbc-4a72-8160-b942ebda8dbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zhgwx" podUID="29dad085-0fbc-4a72-8160-b942ebda8dbc" Jul 14 22:38:37.216001 containerd[1471]: 
time="2025-07-14T22:38:37.215738883Z" level=error msg="Failed to destroy network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.216260 containerd[1471]: time="2025-07-14T22:38:37.216222619Z" level=error msg="encountered an error cleaning up failed sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.216307 containerd[1471]: time="2025-07-14T22:38:37.216282283Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4z895,Uid:b8c6f0a5-a93f-42f8-a410-795cf33b659f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.217119 kubelet[2604]: E0714 22:38:37.216579 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.217119 kubelet[2604]: E0714 22:38:37.216641 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4z895" Jul 14 22:38:37.217119 kubelet[2604]: E0714 22:38:37.216662 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-4z895" Jul 14 22:38:37.217323 kubelet[2604]: E0714 22:38:37.216717 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-4z895_calico-system(b8c6f0a5-a93f-42f8-a410-795cf33b659f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-4z895_calico-system(b8c6f0a5-a93f-42f8-a410-795cf33b659f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-4z895" 
podUID="b8c6f0a5-a93f-42f8-a410-795cf33b659f" Jul 14 22:38:37.233553 containerd[1471]: time="2025-07-14T22:38:37.233483104Z" level=error msg="Failed to destroy network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.234018 containerd[1471]: time="2025-07-14T22:38:37.233982901Z" level=error msg="encountered an error cleaning up failed sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.234085 containerd[1471]: time="2025-07-14T22:38:37.234063833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9lnpr,Uid:23ede824-2f3c-4c4c-b760-db02257f0bab,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.234777 kubelet[2604]: E0714 22:38:37.234299 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.234777 kubelet[2604]: E0714 22:38:37.234371 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9lnpr" Jul 14 22:38:37.234777 kubelet[2604]: E0714 22:38:37.234424 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-9lnpr" Jul 14 22:38:37.234947 kubelet[2604]: E0714 22:38:37.234492 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-9lnpr_kube-system(23ede824-2f3c-4c4c-b760-db02257f0bab)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-9lnpr_kube-system(23ede824-2f3c-4c4c-b760-db02257f0bab)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9lnpr" podUID="23ede824-2f3c-4c4c-b760-db02257f0bab" Jul 14 22:38:37.250940 containerd[1471]: time="2025-07-14T22:38:37.250843927Z" level=error msg="Failed to destroy network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.251274 containerd[1471]: time="2025-07-14T22:38:37.251244045Z" level=error msg="encountered an error cleaning up failed sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.251326 containerd[1471]: time="2025-07-14T22:38:37.251290944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-djhwk,Uid:eb8d9286-50a6-4899-a9a8-90e0bbd55a23,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.251599 kubelet[2604]: E0714 22:38:37.251554 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.251643 kubelet[2604]: E0714 22:38:37.251618 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" Jul 14 22:38:37.251643 kubelet[2604]: E0714 22:38:37.251638 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" Jul 14 22:38:37.251702 kubelet[2604]: E0714 22:38:37.251682 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d69cdc74-djhwk_calico-apiserver(eb8d9286-50a6-4899-a9a8-90e0bbd55a23)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d69cdc74-djhwk_calico-apiserver(eb8d9286-50a6-4899-a9a8-90e0bbd55a23)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" podUID="eb8d9286-50a6-4899-a9a8-90e0bbd55a23" Jul 14 22:38:37.260865 containerd[1471]: time="2025-07-14T22:38:37.260757009Z" level=error msg="Failed to destroy network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.261180 containerd[1471]: time="2025-07-14T22:38:37.261149814Z" level=error msg="encountered an error cleaning up failed sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.261227 containerd[1471]: time="2025-07-14T22:38:37.261194949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798dc4cdb-fh8g7,Uid:121a6ca1-b03e-4bca-84d8-4cf70c6b267d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.261390 kubelet[2604]: E0714 22:38:37.261348 2604 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.261390 kubelet[2604]: E0714 22:38:37.261391 2604 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" Jul 14 22:38:37.261390 kubelet[2604]: E0714 22:38:37.261411 2604 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" Jul 14 22:38:37.261608 kubelet[2604]: E0714 22:38:37.261463 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-798dc4cdb-fh8g7_calico-system(121a6ca1-b03e-4bca-84d8-4cf70c6b267d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-798dc4cdb-fh8g7_calico-system(121a6ca1-b03e-4bca-84d8-4cf70c6b267d)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" podUID="121a6ca1-b03e-4bca-84d8-4cf70c6b267d" Jul 14 22:38:37.391284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e-shm.mount: Deactivated successfully. Jul 14 22:38:37.391399 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5-shm.mount: Deactivated successfully. Jul 14 22:38:37.849488 kubelet[2604]: I0714 22:38:37.849428 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:38:37.850622 kubelet[2604]: I0714 22:38:37.850595 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:38:37.852393 kubelet[2604]: I0714 22:38:37.852363 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:38:37.853255 kubelet[2604]: I0714 22:38:37.853239 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:38:37.914331 containerd[1471]: time="2025-07-14T22:38:37.914177343Z" level=info msg="StopPodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\"" Jul 14 22:38:37.914807 containerd[1471]: time="2025-07-14T22:38:37.914679133Z" level=info msg="StopPodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\"" Jul 14 22:38:37.914971 containerd[1471]: time="2025-07-14T22:38:37.914924378Z" level=info msg="StopPodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\"" Jul 14 22:38:37.915107 containerd[1471]: time="2025-07-14T22:38:37.915033475Z" level=info msg="StopPodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\"" Jul 14 22:38:37.916148 containerd[1471]: time="2025-07-14T22:38:37.916050702Z" level=info msg="Ensure that sandbox 53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9 in task-service has been cleanup successfully" Jul 14 22:38:37.916148 containerd[1471]: time="2025-07-14T22:38:37.916071221Z" level=info msg="Ensure that sandbox 8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a in task-service has been cleanup successfully" Jul 14 22:38:37.916490 containerd[1471]: time="2025-07-14T22:38:37.916057375Z" level=info msg="Ensure that sandbox bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e in task-service has been cleanup successfully" Jul 14 22:38:37.916540 kubelet[2604]: I0714 22:38:37.916498 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:38:37.917834 containerd[1471]: time="2025-07-14T22:38:37.917025920Z" level=info msg="StopPodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\"" Jul 14 22:38:37.917834 containerd[1471]: time="2025-07-14T22:38:37.917169002Z" level=info msg="Ensure that 
sandbox fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5 in task-service has been cleanup successfully" Jul 14 22:38:37.925948 containerd[1471]: time="2025-07-14T22:38:37.916065520Z" level=info msg="Ensure that sandbox 91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709 in task-service has been cleanup successfully" Jul 14 22:38:37.926916 kubelet[2604]: I0714 22:38:37.926891 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:38:37.927667 containerd[1471]: time="2025-07-14T22:38:37.927628880Z" level=info msg="StopPodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\"" Jul 14 22:38:37.928284 containerd[1471]: time="2025-07-14T22:38:37.928265156Z" level=info msg="Ensure that sandbox 043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc in task-service has been cleanup successfully" Jul 14 22:38:37.930095 kubelet[2604]: I0714 22:38:37.930070 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:38:37.930695 containerd[1471]: time="2025-07-14T22:38:37.930661536Z" level=info msg="StopPodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\"" Jul 14 22:38:37.932256 containerd[1471]: time="2025-07-14T22:38:37.932223345Z" level=info msg="Ensure that sandbox abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59 in task-service has been cleanup successfully" Jul 14 22:38:37.938191 kubelet[2604]: I0714 22:38:37.938149 2604 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:38:37.945178 containerd[1471]: time="2025-07-14T22:38:37.944940831Z" level=info msg="StopPodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\"" Jul 14 22:38:37.954886 containerd[1471]: time="2025-07-14T22:38:37.954800762Z" level=info msg="Ensure that sandbox e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52 in task-service has been cleanup successfully" Jul 14 22:38:37.999611 containerd[1471]: time="2025-07-14T22:38:37.999535783Z" level=error msg="StopPodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" failed" error="failed to destroy network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:37.999858 kubelet[2604]: E0714 22:38:37.999799 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:38:37.999911 kubelet[2604]: E0714 22:38:37.999875 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9"} Jul 14 22:38:38.000022 kubelet[2604]: E0714 22:38:37.999930 2604 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b8c6f0a5-a93f-42f8-a410-795cf33b659f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.000119 kubelet[2604]: E0714 22:38:38.000030 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b8c6f0a5-a93f-42f8-a410-795cf33b659f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-4z895" podUID="b8c6f0a5-a93f-42f8-a410-795cf33b659f" Jul 14 22:38:38.008966 containerd[1471]: time="2025-07-14T22:38:38.008908528Z" level=error msg="StopPodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" failed" error="failed to destroy network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.009544 kubelet[2604]: E0714 22:38:38.009264 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:38:38.009544 kubelet[2604]: E0714 22:38:38.009328 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5"} Jul 14 22:38:38.009544 kubelet[2604]: E0714 22:38:38.009351 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12e54208-f5d7-4225-a878-cbfd7ce81981\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.009544 kubelet[2604]: E0714 22:38:38.009371 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12e54208-f5d7-4225-a878-cbfd7ce81981\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rnmm5" 
podUID="12e54208-f5d7-4225-a878-cbfd7ce81981" Jul 14 22:38:38.009917 containerd[1471]: time="2025-07-14T22:38:38.009882283Z" level=error msg="StopPodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" failed" error="failed to destroy network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.010066 kubelet[2604]: E0714 22:38:38.010029 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:38:38.010066 kubelet[2604]: E0714 22:38:38.010062 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709"} Jul 14 22:38:38.010136 kubelet[2604]: E0714 22:38:38.010083 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb8d9286-50a6-4899-a9a8-90e0bbd55a23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.010136 kubelet[2604]: E0714 22:38:38.010101 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb8d9286-50a6-4899-a9a8-90e0bbd55a23\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" podUID="eb8d9286-50a6-4899-a9a8-90e0bbd55a23" Jul 14 22:38:38.017718 containerd[1471]: time="2025-07-14T22:38:38.017647482Z" level=error msg="StopPodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" failed" error="failed to destroy network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.017980 kubelet[2604]: E0714 22:38:38.017922 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:38:38.017980 
kubelet[2604]: E0714 22:38:38.017963 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a"} Jul 14 22:38:38.018074 kubelet[2604]: E0714 22:38:38.017984 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"23ede824-2f3c-4c4c-b760-db02257f0bab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.018074 kubelet[2604]: E0714 22:38:38.018004 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"23ede824-2f3c-4c4c-b760-db02257f0bab\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-9lnpr" podUID="23ede824-2f3c-4c4c-b760-db02257f0bab" Jul 14 22:38:38.018437 containerd[1471]: time="2025-07-14T22:38:38.018389378Z" level=error msg="StopPodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" failed" error="failed to destroy network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.018764 kubelet[2604]: E0714 22:38:38.018593 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:38:38.018764 kubelet[2604]: E0714 22:38:38.018650 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e"} Jul 14 22:38:38.018764 kubelet[2604]: E0714 22:38:38.018681 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"61b0e57f-63f8-4046-911a-210a3070cdd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.018764 kubelet[2604]: E0714 22:38:38.018705 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"61b0e57f-63f8-4046-911a-210a3070cdd1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" podUID="61b0e57f-63f8-4046-911a-210a3070cdd1" Jul 14 22:38:38.024643 containerd[1471]: time="2025-07-14T22:38:38.024586086Z" level=error msg="StopPodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" failed" error="failed to destroy network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.024930 kubelet[2604]: E0714 22:38:38.024891 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:38:38.025012 kubelet[2604]: E0714 22:38:38.024967 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc"} Jul 14 22:38:38.025069 kubelet[2604]: E0714 22:38:38.025010 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"29dad085-0fbc-4a72-8160-b942ebda8dbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.025157 kubelet[2604]: E0714 22:38:38.025071 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"29dad085-0fbc-4a72-8160-b942ebda8dbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zhgwx" podUID="29dad085-0fbc-4a72-8160-b942ebda8dbc" Jul 14 22:38:38.026436 containerd[1471]: time="2025-07-14T22:38:38.026384213Z" level=error msg="StopPodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" failed" error="failed to destroy network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.026589 kubelet[2604]: E0714 22:38:38.026545 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:38:38.026589 kubelet[2604]: E0714 22:38:38.026579 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59"} Jul 14 22:38:38.026682 kubelet[2604]: E0714 22:38:38.026609 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"121a6ca1-b03e-4bca-84d8-4cf70c6b267d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:38:38.026682 kubelet[2604]: E0714 22:38:38.026633 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"121a6ca1-b03e-4bca-84d8-4cf70c6b267d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" podUID="121a6ca1-b03e-4bca-84d8-4cf70c6b267d" Jul 14 22:38:38.034922 containerd[1471]: time="2025-07-14T22:38:38.034865088Z" level=error msg="StopPodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" failed" error="failed to destroy network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:38:38.035175 kubelet[2604]: E0714 22:38:38.035132 2604 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:38:38.035236 kubelet[2604]: E0714 22:38:38.035191 2604 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52"} Jul 14 22:38:38.035236 kubelet[2604]: E0714 22:38:38.035226 2604 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" Jul 14 22:38:38.035327 kubelet[2604]: E0714 22:38:38.035259 2604 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-845d86f57b-r96bf" podUID="be1ccdfc-351c-44ad-8eeb-7552d4e4f518" Jul 14 22:38:47.046521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327696584.mount: Deactivated successfully. Jul 14 22:38:48.847770 containerd[1471]: time="2025-07-14T22:38:48.847693303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:48.849657 containerd[1471]: time="2025-07-14T22:38:48.849616459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:38:48.854271 containerd[1471]: time="2025-07-14T22:38:48.854230104Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:48.856643 containerd[1471]: time="2025-07-14T22:38:48.856613721Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:48.857206 containerd[1471]: time="2025-07-14T22:38:48.857180863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.012872262s" Jul 14 22:38:48.857256 containerd[1471]: time="2025-07-14T22:38:48.857210018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:38:48.870459 containerd[1471]: time="2025-07-14T22:38:48.870393208Z" level=info msg="CreateContainer within sandbox \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:38:48.900032 containerd[1471]: time="2025-07-14T22:38:48.899968114Z" level=info msg="CreateContainer within sandbox \"0b046173b79d104a0ca4d15b14df846a26bf1ef3cc8b9534c502bc2402b4a901\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"963008f8a928b01b3ea039f13c6255aa7887dc1435dc9c1eb18d1546b62af350\"" Jul 14 22:38:48.900959 containerd[1471]: time="2025-07-14T22:38:48.900905125Z" level=info msg="StartContainer for \"963008f8a928b01b3ea039f13c6255aa7887dc1435dc9c1eb18d1546b62af350\"" Jul 14 22:38:48.955656 systemd[1]: Started cri-containerd-963008f8a928b01b3ea039f13c6255aa7887dc1435dc9c1eb18d1546b62af350.scope - libcontainer container 963008f8a928b01b3ea039f13c6255aa7887dc1435dc9c1eb18d1546b62af350. 
Jul 14 22:38:48.999659 containerd[1471]: time="2025-07-14T22:38:48.999595278Z" level=info msg="StartContainer for \"963008f8a928b01b3ea039f13c6255aa7887dc1435dc9c1eb18d1546b62af350\" returns successfully"
Jul 14 22:38:49.084885 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jul 14 22:38:49.086250 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jul 14 22:38:49.177318 containerd[1471]: time="2025-07-14T22:38:49.177158361Z" level=info msg="StopPodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\""
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.262 [INFO][3977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.262 [INFO][3977] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" iface="eth0" netns="/var/run/netns/cni-aab70dca-9588-eb25-5125-25d69baa7b7b"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.262 [INFO][3977] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" iface="eth0" netns="/var/run/netns/cni-aab70dca-9588-eb25-5125-25d69baa7b7b"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.263 [INFO][3977] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" iface="eth0" netns="/var/run/netns/cni-aab70dca-9588-eb25-5125-25d69baa7b7b"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.263 [INFO][3977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.263 [INFO][3977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.324 [INFO][3985] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.325 [INFO][3985] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.325 [INFO][3985] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.332 [WARNING][3985] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.332 [INFO][3985] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0"
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.333 [INFO][3985] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:38:49.341349 containerd[1471]: 2025-07-14 22:38:49.337 [INFO][3977] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52"
Jul 14 22:38:49.341887 containerd[1471]: time="2025-07-14T22:38:49.341544849Z" level=info msg="TearDown network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" successfully"
Jul 14 22:38:49.341887 containerd[1471]: time="2025-07-14T22:38:49.341580437Z" level=info msg="StopPodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" returns successfully"
Jul 14 22:38:49.364783 kubelet[2604]: I0714 22:38:49.364728 2604 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-backend-key-pair\") pod \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\" (UID: \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\") "
Jul 14 22:38:49.364783 kubelet[2604]: I0714 22:38:49.364781 2604 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnmv6\" (UniqueName: \"kubernetes.io/projected/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-kube-api-access-mnmv6\") pod \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\" (UID: \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\") "
Jul 14 22:38:49.365343 kubelet[2604]: I0714 22:38:49.364825 2604 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-ca-bundle\") pod \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\" (UID: \"be1ccdfc-351c-44ad-8eeb-7552d4e4f518\") "
Jul 14 22:38:49.366805 kubelet[2604]: I0714 22:38:49.366739 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "be1ccdfc-351c-44ad-8eeb-7552d4e4f518" (UID: "be1ccdfc-351c-44ad-8eeb-7552d4e4f518"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 14 22:38:49.370795 kubelet[2604]: I0714 22:38:49.370753 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "be1ccdfc-351c-44ad-8eeb-7552d4e4f518" (UID: "be1ccdfc-351c-44ad-8eeb-7552d4e4f518"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jul 14 22:38:49.370932 kubelet[2604]: I0714 22:38:49.370805 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-kube-api-access-mnmv6" (OuterVolumeSpecName: "kube-api-access-mnmv6") pod "be1ccdfc-351c-44ad-8eeb-7552d4e4f518" (UID: "be1ccdfc-351c-44ad-8eeb-7552d4e4f518"). InnerVolumeSpecName "kube-api-access-mnmv6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 14 22:38:49.466028 kubelet[2604]: I0714 22:38:49.465893 2604 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Jul 14 22:38:49.466028 kubelet[2604]: I0714 22:38:49.465947 2604 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mnmv6\" (UniqueName: \"kubernetes.io/projected/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-kube-api-access-mnmv6\") on node \"localhost\" DevicePath \"\""
Jul 14 22:38:49.466028 kubelet[2604]: I0714 22:38:49.465963 2604 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be1ccdfc-351c-44ad-8eeb-7552d4e4f518-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Jul 14 22:38:49.521927 containerd[1471]: time="2025-07-14T22:38:49.521886144Z" level=info msg="StopPodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\""
Jul 14 22:38:49.522009 containerd[1471]: time="2025-07-14T22:38:49.521954533Z" level=info msg="StopPodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\""
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.569 [INFO][4028] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4028] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" iface="eth0" netns="/var/run/netns/cni-73d7dc9b-cea5-d1ea-c015-a79c613777cb"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4028] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" iface="eth0" netns="/var/run/netns/cni-73d7dc9b-cea5-d1ea-c015-a79c613777cb"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4028] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" iface="eth0" netns="/var/run/netns/cni-73d7dc9b-cea5-d1ea-c015-a79c613777cb"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4028] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4028] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.593 [INFO][4044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.593 [INFO][4044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.593 [INFO][4044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.599 [WARNING][4044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.599 [INFO][4044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.600 [INFO][4044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:38:49.605794 containerd[1471]: 2025-07-14 22:38:49.602 [INFO][4028] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709"
Jul 14 22:38:49.606776 containerd[1471]: time="2025-07-14T22:38:49.605979691Z" level=info msg="TearDown network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" successfully"
Jul 14 22:38:49.606776 containerd[1471]: time="2025-07-14T22:38:49.606012193Z" level=info msg="StopPodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" returns successfully"
Jul 14 22:38:49.606872 containerd[1471]: time="2025-07-14T22:38:49.606832634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-djhwk,Uid:eb8d9286-50a6-4899-a9a8-90e0bbd55a23,Namespace:calico-apiserver,Attempt:1,}"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4027] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.570 [INFO][4027] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" iface="eth0" netns="/var/run/netns/cni-e4643569-c839-671f-ec9f-bc47e70be7e6"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.571 [INFO][4027] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" iface="eth0" netns="/var/run/netns/cni-e4643569-c839-671f-ec9f-bc47e70be7e6"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.571 [INFO][4027] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" iface="eth0" netns="/var/run/netns/cni-e4643569-c839-671f-ec9f-bc47e70be7e6"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.571 [INFO][4027] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.571 [INFO][4027] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.594 [INFO][4046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.594 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.600 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.605 [WARNING][4046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.605 [INFO][4046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.606 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:38:49.612297 containerd[1471]: 2025-07-14 22:38:49.608 [INFO][4027] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a"
Jul 14 22:38:49.612907 containerd[1471]: time="2025-07-14T22:38:49.612443572Z" level=info msg="TearDown network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" successfully"
Jul 14 22:38:49.612907 containerd[1471]: time="2025-07-14T22:38:49.612484470Z" level=info msg="StopPodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" returns successfully"
Jul 14 22:38:49.612988 kubelet[2604]: E0714 22:38:49.612764 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:38:49.613475 containerd[1471]: time="2025-07-14T22:38:49.613402396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9lnpr,Uid:23ede824-2f3c-4c4c-b760-db02257f0bab,Namespace:kube-system,Attempt:1,}"
Jul 14 22:38:49.742205 systemd-networkd[1403]: calidab2dcf2b55: Link UP
Jul 14 22:38:49.742441 systemd-networkd[1403]: calidab2dcf2b55: Gained carrier
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.652 [INFO][4060] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.666 [INFO][4060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0 calico-apiserver-d69cdc74- calico-apiserver eb8d9286-50a6-4899-a9a8-90e0bbd55a23 1034 0 2025-07-14 22:38:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d69cdc74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d69cdc74-djhwk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidab2dcf2b55 [] [] }} ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.666 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.696 [INFO][4087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" HandleID="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.696 [INFO][4087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" HandleID="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000345600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d69cdc74-djhwk", "timestamp":"2025-07-14 22:38:49.696229899 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.696 [INFO][4087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.696 [INFO][4087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.696 [INFO][4087] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.703 [INFO][4087] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.711 [INFO][4087] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.716 [INFO][4087] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.717 [INFO][4087] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.720 [INFO][4087] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.720 [INFO][4087] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.722 [INFO][4087] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.726 [INFO][4087] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.731 [INFO][4087] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.731 [INFO][4087] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" host="localhost"
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.731 [INFO][4087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:38:49.762680 containerd[1471]: 2025-07-14 22:38:49.731 [INFO][4087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" HandleID="k8s-pod-network.121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.763312 containerd[1471]: 2025-07-14 22:38:49.734 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb8d9286-50a6-4899-a9a8-90e0bbd55a23", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d69cdc74-djhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidab2dcf2b55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:38:49.763312 containerd[1471]: 2025-07-14 22:38:49.734 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.763312 containerd[1471]: 2025-07-14 22:38:49.734 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidab2dcf2b55 ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.763312 containerd[1471]: 2025-07-14 22:38:49.742 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.763312 containerd[1471]: 2025-07-14 22:38:49.744 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb8d9286-50a6-4899-a9a8-90e0bbd55a23", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d", Pod:"calico-apiserver-d69cdc74-djhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidab2dcf2b55", MAC:"a2:3e:a1:73:fc:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:38:49.763312 containerd[1471]: 2025-07-14 22:38:49.757 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-djhwk" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0"
Jul 14 22:38:49.796214 containerd[1471]: time="2025-07-14T22:38:49.796081540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:38:49.796214 containerd[1471]: time="2025-07-14T22:38:49.796147295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:38:49.796214 containerd[1471]: time="2025-07-14T22:38:49.796158606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:38:49.796422 containerd[1471]: time="2025-07-14T22:38:49.796255800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:38:49.823483 systemd[1]: Started cri-containerd-121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d.scope - libcontainer container 121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d.
Jul 14 22:38:49.848538 systemd-networkd[1403]: caliab2774d67e4: Link UP
Jul 14 22:38:49.848839 systemd-networkd[1403]: caliab2774d67e4: Gained carrier
Jul 14 22:38:49.850108 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:38:49.871646 systemd[1]: run-netns-cni\x2d73d7dc9b\x2dcea5\x2dd1ea\x2dc015\x2da79c613777cb.mount: Deactivated successfully.
Jul 14 22:38:49.871777 systemd[1]: run-netns-cni\x2de4643569\x2dc839\x2d671f\x2dec9f\x2dbc47e70be7e6.mount: Deactivated successfully.
Jul 14 22:38:49.871880 systemd[1]: run-netns-cni\x2daab70dca\x2d9588\x2deb25\x2d5125\x2d25d69baa7b7b.mount: Deactivated successfully.
Jul 14 22:38:49.871957 systemd[1]: var-lib-kubelet-pods-be1ccdfc\x2d351c\x2d44ad\x2d8eeb\x2d7552d4e4f518-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmnmv6.mount: Deactivated successfully.
Jul 14 22:38:49.872034 systemd[1]: var-lib-kubelet-pods-be1ccdfc\x2d351c\x2d44ad\x2d8eeb\x2d7552d4e4f518-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.665 [INFO][4071] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.678 [INFO][4071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0 coredns-674b8bbfcf- kube-system 23ede824-2f3c-4c4c-b760-db02257f0bab 1033 0 2025-07-14 22:37:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9lnpr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliab2774d67e4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.678 [INFO][4071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.706 [INFO][4094] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" HandleID="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.706 [INFO][4094] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" HandleID="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f720), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9lnpr", "timestamp":"2025-07-14 22:38:49.706091675 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.706 [INFO][4094] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.731 [INFO][4094] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.732 [INFO][4094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.805 [INFO][4094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.812 [INFO][4094] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.817 [INFO][4094] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.819 [INFO][4094] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.822 [INFO][4094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.822 [INFO][4094] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.823 [INFO][4094] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.827 [INFO][4094] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.836 [INFO][4094] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.836 [INFO][4094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" host="localhost"
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.837 [INFO][4094] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:38:49.874095 containerd[1471]: 2025-07-14 22:38:49.837 [INFO][4094] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" HandleID="k8s-pod-network.86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.875345 containerd[1471]: 2025-07-14 22:38:49.840 [INFO][4071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"23ede824-2f3c-4c4c-b760-db02257f0bab", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9lnpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab2774d67e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:38:49.875345 containerd[1471]: 2025-07-14 22:38:49.841 [INFO][4071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.875345 containerd[1471]: 2025-07-14 22:38:49.841 [INFO][4071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab2774d67e4 ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.875345 containerd[1471]: 2025-07-14 22:38:49.848 [INFO][4071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.875345 containerd[1471]: 2025-07-14 22:38:49.849 [INFO][4071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"23ede824-2f3c-4c4c-b760-db02257f0bab", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f", Pod:"coredns-674b8bbfcf-9lnpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab2774d67e4", MAC:"a2:e4:70:3e:64:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:38:49.875345 containerd[1471]: 2025-07-14 22:38:49.866 [INFO][4071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f" Namespace="kube-system" Pod="coredns-674b8bbfcf-9lnpr" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0"
Jul 14 22:38:49.885472 containerd[1471]: time="2025-07-14T22:38:49.885403794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-djhwk,Uid:eb8d9286-50a6-4899-a9a8-90e0bbd55a23,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d\""
Jul 14 22:38:49.887015 containerd[1471]: time="2025-07-14T22:38:49.886971999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 14 22:38:49.976986 systemd[1]: Removed slice kubepods-besteffort-podbe1ccdfc_351c_44ad_8eeb_7552d4e4f518.slice - libcontainer container kubepods-besteffort-podbe1ccdfc_351c_44ad_8eeb_7552d4e4f518.slice.
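A readability note on the WorkloadEndpoint dumps above: Go's struct printer emits the port numbers as hex literals (Port:0x35, Port:0x23c1). Decoded, they are the expected CoreDNS ports:

```go
package main

import "fmt"

func main() {
	fmt.Println("dns:", 0x35)       // 53/UDP
	fmt.Println("dns-tcp:", 0x35)   // 53/TCP
	fmt.Println("metrics:", 0x23c1) // 9153/TCP, the CoreDNS metrics port
}
```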
Jul 14 22:38:49.985994 kubelet[2604]: I0714 22:38:49.985408 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-np8k4" podStartSLOduration=2.11630308 podStartE2EDuration="42.985391091s" podCreationTimestamp="2025-07-14 22:38:07 +0000 UTC" firstStartedPulling="2025-07-14 22:38:07.988682718 +0000 UTC m=+27.550987354" lastFinishedPulling="2025-07-14 22:38:48.857770729 +0000 UTC m=+68.420075365" observedRunningTime="2025-07-14 22:38:49.984027934 +0000 UTC m=+69.546332570" watchObservedRunningTime="2025-07-14 22:38:49.985391091 +0000 UTC m=+69.547695728"
Jul 14 22:38:49.990640 containerd[1471]: time="2025-07-14T22:38:49.990321995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:38:49.990640 containerd[1471]: time="2025-07-14T22:38:49.990388972Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:38:49.990640 containerd[1471]: time="2025-07-14T22:38:49.990402086Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:38:49.990640 containerd[1471]: time="2025-07-14T22:38:49.990515571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:38:50.033122 systemd[1]: Started cri-containerd-86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f.scope - libcontainer container 86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f.
Jul 14 22:38:50.053289 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:38:50.087509 containerd[1471]: time="2025-07-14T22:38:50.087438952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9lnpr,Uid:23ede824-2f3c-4c4c-b760-db02257f0bab,Namespace:kube-system,Attempt:1,} returns sandbox id \"86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f\""
Jul 14 22:38:50.088358 kubelet[2604]: E0714 22:38:50.088326 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:38:50.378173 containerd[1471]: time="2025-07-14T22:38:50.377996396Z" level=info msg="CreateContainer within sandbox \"86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 14 22:38:50.434888 systemd[1]: Created slice kubepods-besteffort-pod3dd8327a_7b73_4cd7_8f10_b408a8e95309.slice - libcontainer container kubepods-besteffort-pod3dd8327a_7b73_4cd7_8f10_b408a8e95309.slice.
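The startup-latency record above is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that span minus the image-pull window, i.e. the SLO figure excludes pull time. A quick Go check using the timestamps from the log bears this out:

```go
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-14 22:38:07 +0000 UTC")
	pullStart := mustParse("2025-07-14 22:38:07.988682718 +0000 UTC")
	pullEnd := mustParse("2025-07-14 22:38:48.857770729 +0000 UTC")
	observed := mustParse("2025-07-14 22:38:49.985391091 +0000 UTC")

	e2e := observed.Sub(created)        // 42.985391091s = podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // 2.11630308s  = podStartSLOduration
	fmt.Println(e2e, slo)
}
```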
Jul 14 22:38:50.474489 kubelet[2604]: I0714 22:38:50.473708 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3dd8327a-7b73-4cd7-8f10-b408a8e95309-whisker-backend-key-pair\") pod \"whisker-6c9bdc6668-k8xfc\" (UID: \"3dd8327a-7b73-4cd7-8f10-b408a8e95309\") " pod="calico-system/whisker-6c9bdc6668-k8xfc"
Jul 14 22:38:50.474489 kubelet[2604]: I0714 22:38:50.473802 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3dd8327a-7b73-4cd7-8f10-b408a8e95309-whisker-ca-bundle\") pod \"whisker-6c9bdc6668-k8xfc\" (UID: \"3dd8327a-7b73-4cd7-8f10-b408a8e95309\") " pod="calico-system/whisker-6c9bdc6668-k8xfc"
Jul 14 22:38:50.474489 kubelet[2604]: I0714 22:38:50.473866 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhcgr\" (UniqueName: \"kubernetes.io/projected/3dd8327a-7b73-4cd7-8f10-b408a8e95309-kube-api-access-vhcgr\") pod \"whisker-6c9bdc6668-k8xfc\" (UID: \"3dd8327a-7b73-4cd7-8f10-b408a8e95309\") " pod="calico-system/whisker-6c9bdc6668-k8xfc"
Jul 14 22:38:50.474758 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:56080.service - OpenSSH per-connection server daemon (10.0.0.1:56080).
Jul 14 22:38:50.494659 containerd[1471]: time="2025-07-14T22:38:50.490353912Z" level=info msg="CreateContainer within sandbox \"86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c7a909012bd4d72e85f67fb499189c3ff8f3490ac309aee683f45f03a0db20cf\""
Jul 14 22:38:50.503991 containerd[1471]: time="2025-07-14T22:38:50.497651268Z" level=info msg="StartContainer for \"c7a909012bd4d72e85f67fb499189c3ff8f3490ac309aee683f45f03a0db20cf\""
Jul 14 22:38:50.532195 kubelet[2604]: E0714 22:38:50.532143 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:38:50.544769 kubelet[2604]: I0714 22:38:50.544716 2604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be1ccdfc-351c-44ad-8eeb-7552d4e4f518" path="/var/lib/kubelet/pods/be1ccdfc-351c-44ad-8eeb-7552d4e4f518/volumes"
Jul 14 22:38:50.556614 systemd[1]: Started cri-containerd-c7a909012bd4d72e85f67fb499189c3ff8f3490ac309aee683f45f03a0db20cf.scope - libcontainer container c7a909012bd4d72e85f67fb499189c3ff8f3490ac309aee683f45f03a0db20cf.
Jul 14 22:38:50.575711 sshd[4245]: Accepted publickey for core from 10.0.0.1 port 56080 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:38:50.583807 sshd[4245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:38:50.594613 systemd-logind[1452]: New session 8 of user core.
Jul 14 22:38:50.605329 systemd[1]: Started session-8.scope - Session 8 of User core.
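The recurring dns.go:153 "Nameserver limits exceeded" warnings reflect the resolver cap: the applied line keeps only the first three nameservers (glibc's MAXNS) and omits the rest. A small illustrative sketch of that trimming; kubelet's real logic lives in dns.go, and the sample resolv.conf content below is hypothetical:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS

// applyNameserverLimit collects nameserver entries from resolv.conf text
// and truncates the list to the resolver's limit, as the warning describes.
func applyNameserverLimit(resolvConf string) []string {
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers] // the omitted extras trigger the log line
	}
	return servers
}

func main() {
	// Hypothetical resolv.conf with one nameserver too many.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	fmt.Println(applyNameserverLimit(conf)) // [1.1.1.1 1.0.0.1 8.8.8.8]
}
```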
Jul 14 22:38:50.681424 containerd[1471]: time="2025-07-14T22:38:50.681227770Z" level=info msg="StartContainer for \"c7a909012bd4d72e85f67fb499189c3ff8f3490ac309aee683f45f03a0db20cf\" returns successfully"
Jul 14 22:38:50.741118 containerd[1471]: time="2025-07-14T22:38:50.741056023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9bdc6668-k8xfc,Uid:3dd8327a-7b73-4cd7-8f10-b408a8e95309,Namespace:calico-system,Attempt:0,}"
Jul 14 22:38:50.825580 kernel: bpftool[4424]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jul 14 22:38:50.860874 sshd[4245]: pam_unix(sshd:session): session closed for user core
Jul 14 22:38:50.871783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262956782.mount: Deactivated successfully.
Jul 14 22:38:50.873831 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:56080.service: Deactivated successfully.
Jul 14 22:38:50.877325 systemd[1]: session-8.scope: Deactivated successfully.
Jul 14 22:38:50.880301 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Jul 14 22:38:50.881688 systemd-logind[1452]: Removed session 8.
Jul 14 22:38:50.931906 systemd-networkd[1403]: cali7878c77f52d: Link UP
Jul 14 22:38:50.933146 systemd-networkd[1403]: cali7878c77f52d: Gained carrier
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.842 [INFO][4401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0 whisker-6c9bdc6668- calico-system 3dd8327a-7b73-4cd7-8f10-b408a8e95309 1069 0 2025-07-14 22:38:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c9bdc6668 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c9bdc6668-k8xfc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7878c77f52d [] [] }} ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.842 [INFO][4401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.878 [INFO][4428] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" HandleID="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Workload="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.879 [INFO][4428] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" HandleID="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Workload="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c9bdc6668-k8xfc", "timestamp":"2025-07-14 22:38:50.878681526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.879 [INFO][4428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.880 [INFO][4428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.880 [INFO][4428] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.888 [INFO][4428] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.893 [INFO][4428] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.900 [INFO][4428] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.902 [INFO][4428] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.906 [INFO][4428] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.907 [INFO][4428] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.909 [INFO][4428] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.919 [INFO][4428] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.925 [INFO][4428] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.925 [INFO][4428] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" host="localhost"
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.925 [INFO][4428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:38:50.950112 containerd[1471]: 2025-07-14 22:38:50.925 [INFO][4428] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" HandleID="k8s-pod-network.6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Workload="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.951985 containerd[1471]: 2025-07-14 22:38:50.929 [INFO][4401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0", GenerateName:"whisker-6c9bdc6668-", Namespace:"calico-system", SelfLink:"", UID:"3dd8327a-7b73-4cd7-8f10-b408a8e95309", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9bdc6668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c9bdc6668-k8xfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7878c77f52d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:38:50.951985 containerd[1471]: 2025-07-14 22:38:50.929 [INFO][4401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.951985 containerd[1471]: 2025-07-14 22:38:50.930 [INFO][4401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7878c77f52d ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.951985 containerd[1471]: 2025-07-14 22:38:50.932 [INFO][4401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.951985 containerd[1471]: 2025-07-14 22:38:50.933 [INFO][4401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0", GenerateName:"whisker-6c9bdc6668-", Namespace:"calico-system", SelfLink:"", UID:"3dd8327a-7b73-4cd7-8f10-b408a8e95309", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c9bdc6668", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317", Pod:"whisker-6c9bdc6668-k8xfc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7878c77f52d", MAC:"02:9d:2a:da:b5:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 14 22:38:50.951985 containerd[1471]: 2025-07-14 22:38:50.945 [INFO][4401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317" Namespace="calico-system" Pod="whisker-6c9bdc6668-k8xfc" WorkloadEndpoint="localhost-k8s-whisker--6c9bdc6668--k8xfc-eth0"
Jul 14 22:38:50.988584 kubelet[2604]: E0714 22:38:50.988532 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:38:50.996555 containerd[1471]: time="2025-07-14T22:38:50.995486020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 14 22:38:50.996555 containerd[1471]: time="2025-07-14T22:38:50.996300239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 14 22:38:50.996555 containerd[1471]: time="2025-07-14T22:38:50.996346697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:38:50.996740 containerd[1471]: time="2025-07-14T22:38:50.996640612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 14 22:38:51.009670 kubelet[2604]: I0714 22:38:51.009600 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9lnpr" podStartSLOduration=67.009583799 podStartE2EDuration="1m7.009583799s" podCreationTimestamp="2025-07-14 22:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:38:51.008551858 +0000 UTC m=+70.570856494" watchObservedRunningTime="2025-07-14 22:38:51.009583799 +0000 UTC m=+70.571888435"
Jul 14 22:38:51.031887 systemd[1]: Started cri-containerd-6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317.scope - libcontainer container 6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317.
Jul 14 22:38:51.058440 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 14 22:38:51.099539 containerd[1471]: time="2025-07-14T22:38:51.097920462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c9bdc6668-k8xfc,Uid:3dd8327a-7b73-4cd7-8f10-b408a8e95309,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317\""
Jul 14 22:38:51.148635 systemd-networkd[1403]: vxlan.calico: Link UP
Jul 14 22:38:51.148647 systemd-networkd[1403]: vxlan.calico: Gained carrier
Jul 14 22:38:51.241598 systemd-networkd[1403]: caliab2774d67e4: Gained IPv6LL
Jul 14 22:38:51.522703 containerd[1471]: time="2025-07-14T22:38:51.522655641Z" level=info msg="StopPodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\""
Jul 14 22:38:51.523490 containerd[1471]: time="2025-07-14T22:38:51.523180133Z" level=info msg="StopPodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\""
Jul 14 22:38:51.523622 containerd[1471]: time="2025-07-14T22:38:51.523538700Z" level=info msg="StopPodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\""
Jul 14 22:38:51.625671 systemd-networkd[1403]: calidab2dcf2b55: Gained IPv6LL
Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.640 [INFO][4603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9"
Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.641 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" iface="eth0" netns="/var/run/netns/cni-3e8f2dff-dff3-57c6-683f-53cb5533a763"
Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.641 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" iface="eth0" netns="/var/run/netns/cni-3e8f2dff-dff3-57c6-683f-53cb5533a763"
Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.642 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" iface="eth0" netns="/var/run/netns/cni-3e8f2dff-dff3-57c6-683f-53cb5533a763" Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.643 [INFO][4603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.643 [INFO][4603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.699 [INFO][4622] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.701 [INFO][4622] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.702 [INFO][4622] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.714 [WARNING][4622] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.714 [INFO][4622] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.716 [INFO][4622] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:38:51.722801 containerd[1471]: 2025-07-14 22:38:51.719 [INFO][4603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:38:51.724855 containerd[1471]: time="2025-07-14T22:38:51.723981064Z" level=info msg="TearDown network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" successfully" Jul 14 22:38:51.724855 containerd[1471]: time="2025-07-14T22:38:51.724009118Z" level=info msg="StopPodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" returns successfully" Jul 14 22:38:51.726024 containerd[1471]: time="2025-07-14T22:38:51.725990493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4z895,Uid:b8c6f0a5-a93f-42f8-a410-795cf33b659f,Namespace:calico-system,Attempt:1,}" Jul 14 22:38:51.729200 systemd[1]: run-netns-cni\x2d3e8f2dff\x2ddff3\x2d57c6\x2d683f\x2d53cb5533a763.mount: Deactivated successfully. Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.677 [INFO][4589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.677 [INFO][4589] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" iface="eth0" netns="/var/run/netns/cni-62e59e9c-41c7-03f2-8dbf-0fd2b9faff7d" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.677 [INFO][4589] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" iface="eth0" netns="/var/run/netns/cni-62e59e9c-41c7-03f2-8dbf-0fd2b9faff7d" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.678 [INFO][4589] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" iface="eth0" netns="/var/run/netns/cni-62e59e9c-41c7-03f2-8dbf-0fd2b9faff7d" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.678 [INFO][4589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.678 [INFO][4589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.719 [INFO][4644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.720 [INFO][4644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.720 [INFO][4644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.727 [WARNING][4644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.727 [INFO][4644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.729 [INFO][4644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:38:51.738172 containerd[1471]: 2025-07-14 22:38:51.734 [INFO][4589] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:38:51.738797 containerd[1471]: time="2025-07-14T22:38:51.738424475Z" level=info msg="TearDown network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" successfully" Jul 14 22:38:51.738797 containerd[1471]: time="2025-07-14T22:38:51.738488687Z" level=info msg="StopPodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" returns successfully" Jul 14 22:38:51.739669 containerd[1471]: time="2025-07-14T22:38:51.739148483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnmm5,Uid:12e54208-f5d7-4225-a878-cbfd7ce81981,Namespace:calico-system,Attempt:1,}" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.667 [INFO][4590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.668 [INFO][4590] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" iface="eth0" netns="/var/run/netns/cni-2b26223f-df1c-c427-e255-8150cdb5859b" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.668 [INFO][4590] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" iface="eth0" netns="/var/run/netns/cni-2b26223f-df1c-c427-e255-8150cdb5859b" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.668 [INFO][4590] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" iface="eth0" netns="/var/run/netns/cni-2b26223f-df1c-c427-e255-8150cdb5859b" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.669 [INFO][4590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.669 [INFO][4590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.727 [INFO][4636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.727 [INFO][4636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.729 [INFO][4636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.738 [WARNING][4636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.738 [INFO][4636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.740 [INFO][4636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:38:51.747081 containerd[1471]: 2025-07-14 22:38:51.743 [INFO][4590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:38:51.747593 containerd[1471]: time="2025-07-14T22:38:51.747276357Z" level=info msg="TearDown network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" successfully" Jul 14 22:38:51.747593 containerd[1471]: time="2025-07-14T22:38:51.747302237Z" level=info msg="StopPodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" returns successfully" Jul 14 22:38:51.747739 kubelet[2604]: E0714 22:38:51.747702 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:51.749022 containerd[1471]: time="2025-07-14T22:38:51.748962384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhgwx,Uid:29dad085-0fbc-4a72-8160-b942ebda8dbc,Namespace:kube-system,Attempt:1,}" Jul 14 22:38:51.869622 systemd[1]: run-netns-cni\x2d2b26223f\x2ddf1c\x2dc427\x2de255\x2d8150cdb5859b.mount: Deactivated successfully. Jul 14 22:38:51.869744 systemd[1]: run-netns-cni\x2d62e59e9c\x2d41c7\x2d03f2\x2d8dbf\x2d0fd2b9faff7d.mount: Deactivated successfully. 
Jul 14 22:38:51.876398 systemd-networkd[1403]: cali10c398e07ad: Link UP Jul 14 22:38:51.877310 systemd-networkd[1403]: cali10c398e07ad: Gained carrier Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.799 [INFO][4676] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--4z895-eth0 goldmane-768f4c5c69- calico-system b8c6f0a5-a93f-42f8-a410-795cf33b659f 1111 0 2025-07-14 22:38:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-4z895 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali10c398e07ad [] [] }} ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.799 [INFO][4676] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.830 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" HandleID="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.830 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" HandleID="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-4z895", "timestamp":"2025-07-14 22:38:51.830737745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.830 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.831 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.831 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.840 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.845 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.850 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.851 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.853 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.853 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.854 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26 Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.858 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.864 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.864 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" host="localhost" Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.864 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
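Each assignment sequence above has the same shape: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.88.128/26 block, load the block, hand out the next free address, and write the block back to claim it; the workload endpoint then records the result as a /32 even though it is tracked against the /26. A compact sketch of the block arithmetic using only the standard library (a simplified model, not calico/ipam itself — real Calico also honors reserved host attributes and multiple blocks):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree walks a CIDR block and returns the first address not yet
// allocated — roughly what "Attempting to assign 1 addresses from block"
// boils down to once affinity is confirmed.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // 64 addresses: .128–.191
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.88.128"): true, // earlier allocations in the logs
		netip.MustParseAddr("192.168.88.129"): true,
		netip.MustParseAddr("192.168.88.130"): true,
		netip.MustParseAddr("192.168.88.131"): true,
	}
	if ip, ok := nextFree(block, used); ok {
		used[ip] = true
		// The endpoint records the claimed address as a /32:
		fmt.Println(netip.PrefixFrom(ip, 32)) // 192.168.88.132/32
	}
}
```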
Jul 14 22:38:51.909485 containerd[1471]: 2025-07-14 22:38:51.864 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" HandleID="k8s-pod-network.13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.910251 containerd[1471]: 2025-07-14 22:38:51.872 [INFO][4676] cni-plugin/k8s.go 418: Populated endpoint ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--4z895-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b8c6f0a5-a93f-42f8-a410-795cf33b659f", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-4z895", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali10c398e07ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:51.910251 containerd[1471]: 2025-07-14 22:38:51.872 [INFO][4676] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.910251 containerd[1471]: 2025-07-14 22:38:51.872 [INFO][4676] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10c398e07ad ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.910251 containerd[1471]: 2025-07-14 22:38:51.877 [INFO][4676] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.910251 containerd[1471]: 2025-07-14 22:38:51.878 [INFO][4676] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--4z895-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b8c6f0a5-a93f-42f8-a410-795cf33b659f", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26", Pod:"goldmane-768f4c5c69-4z895", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali10c398e07ad", MAC:"5a:46:92:0c:76:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:51.910251 containerd[1471]: 2025-07-14 22:38:51.895 [INFO][4676] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26" Namespace="calico-system" Pod="goldmane-768f4c5c69-4z895" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:38:51.944103 containerd[1471]: time="2025-07-14T22:38:51.943621687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:51.944103 containerd[1471]: time="2025-07-14T22:38:51.943704704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:51.944103 containerd[1471]: time="2025-07-14T22:38:51.943718941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:51.944282 containerd[1471]: time="2025-07-14T22:38:51.944145917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:51.971615 systemd[1]: Started cri-containerd-13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26.scope - libcontainer container 13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26. 
Jul 14 22:38:51.991871 kubelet[2604]: E0714 22:38:51.991822 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:51.992507 systemd-networkd[1403]: calie30dbbec646: Link UP Jul 14 22:38:51.995531 systemd-networkd[1403]: calie30dbbec646: Gained carrier Jul 14 22:38:52.000658 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.832 [INFO][4695] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rnmm5-eth0 csi-node-driver- calico-system 12e54208-f5d7-4225-a878-cbfd7ce81981 1113 0 2025-07-14 22:38:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rnmm5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie30dbbec646 [] [] }} ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.832 [INFO][4695] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.859 [INFO][4730] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" HandleID="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.859 [INFO][4730] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" HandleID="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139420), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rnmm5", "timestamp":"2025-07-14 22:38:51.8596954 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.859 [INFO][4730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.865 [INFO][4730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.865 [INFO][4730] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.942 [INFO][4730] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.953 [INFO][4730] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.960 [INFO][4730] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.963 [INFO][4730] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.965 [INFO][4730] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.965 [INFO][4730] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.967 [INFO][4730] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413 Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.974 [INFO][4730] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.980 [INFO][4730] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.980 [INFO][4730] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" host="localhost" Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.980 [INFO][4730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:38:52.016998 containerd[1471]: 2025-07-14 22:38:51.980 [INFO][4730] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" HandleID="k8s-pod-network.f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.017952 containerd[1471]: 2025-07-14 22:38:51.983 [INFO][4695] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnmm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12e54208-f5d7-4225-a878-cbfd7ce81981", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rnmm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie30dbbec646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:52.017952 containerd[1471]: 2025-07-14 22:38:51.984 [INFO][4695] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.017952 containerd[1471]: 2025-07-14 22:38:51.984 [INFO][4695] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie30dbbec646 ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.017952 containerd[1471]: 2025-07-14 22:38:51.997 [INFO][4695] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.017952 containerd[1471]: 2025-07-14 22:38:51.997 [INFO][4695] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnmm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12e54208-f5d7-4225-a878-cbfd7ce81981", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413", Pod:"csi-node-driver-rnmm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie30dbbec646", MAC:"56:94:09:fb:66:bf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:52.017952 containerd[1471]: 2025-07-14 22:38:52.010 [INFO][4695] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413" Namespace="calico-system" Pod="csi-node-driver-rnmm5" WorkloadEndpoint="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:38:52.049475 containerd[1471]: time="2025-07-14T22:38:52.048758804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-4z895,Uid:b8c6f0a5-a93f-42f8-a410-795cf33b659f,Namespace:calico-system,Attempt:1,} returns sandbox id \"13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26\"" Jul 14 22:38:52.054658 containerd[1471]: time="2025-07-14T22:38:52.053918706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:52.055439 containerd[1471]: time="2025-07-14T22:38:52.055355061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:52.055439 containerd[1471]: time="2025-07-14T22:38:52.055376561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:52.056138 containerd[1471]: time="2025-07-14T22:38:52.055872218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:52.086774 systemd[1]: Started cri-containerd-f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413.scope - libcontainer container f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413. 
Jul 14 22:38:52.101863 systemd-networkd[1403]: calie57ed54e5e6: Link UP Jul 14 22:38:52.103890 systemd-networkd[1403]: calie57ed54e5e6: Gained carrier Jul 14 22:38:52.111115 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.820 [INFO][4689] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0 coredns-674b8bbfcf- kube-system 29dad085-0fbc-4a72-8160-b942ebda8dbc 1112 0 2025-07-14 22:37:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zhgwx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie57ed54e5e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.822 [INFO][4689] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.875 [INFO][4728] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" HandleID="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.876 [INFO][4728] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" HandleID="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387e00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zhgwx", "timestamp":"2025-07-14 22:38:51.875166033 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.876 [INFO][4728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.980 [INFO][4728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:51.980 [INFO][4728] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.043 [INFO][4728] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.054 [INFO][4728] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.061 [INFO][4728] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.064 [INFO][4728] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.066 [INFO][4728] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.066 [INFO][4728] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.068 [INFO][4728] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.072 [INFO][4728] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.080 [INFO][4728] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.080 [INFO][4728] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" host="localhost" Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.080 [INFO][4728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
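The coredns endpoint being populated here carries named ports whose protocol is dumped as numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"} — a value that may arrive over the API as either a JSON number or a string, with Type recording which form was used. A minimal sketch of that union pattern, modeled on the shape visible in the log rather than the actual libcalico-go type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Protocol accepts either a JSON number (e.g. 6) or a string ("TCP").
// Type records which form it was given; the Type:1 in the log above
// marks the string form (an assumed encoding for this sketch).
type Protocol struct {
	Type   int // 0 = numeric, 1 = string
	NumVal int
	StrVal string
}

// UnmarshalJSON tries the numeric form first, then falls back to string.
func (p *Protocol) UnmarshalJSON(b []byte) error {
	var n int
	if err := json.Unmarshal(b, &n); err == nil {
		p.Type, p.NumVal = 0, n
		return nil
	}
	var s string
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	p.Type, p.StrVal = 1, s
	return nil
}

func main() {
	var a, b Protocol
	_ = json.Unmarshal([]byte(`"TCP"`), &a)
	_ = json.Unmarshal([]byte(`6`), &b)
	fmt.Printf("%+v %+v\n", a, b) // {Type:1 NumVal:0 StrVal:TCP} {Type:0 NumVal:6 StrVal:}
}
```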
Jul 14 22:38:52.123100 containerd[1471]: 2025-07-14 22:38:52.080 [INFO][4728] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" HandleID="k8s-pod-network.0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.123701 containerd[1471]: 2025-07-14 22:38:52.085 [INFO][4689] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29dad085-0fbc-4a72-8160-b942ebda8dbc", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zhgwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57ed54e5e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:52.123701 containerd[1471]: 2025-07-14 22:38:52.085 [INFO][4689] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.123701 containerd[1471]: 2025-07-14 22:38:52.085 [INFO][4689] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie57ed54e5e6 ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.123701 containerd[1471]: 2025-07-14 22:38:52.102 [INFO][4689] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.123701 
containerd[1471]: 2025-07-14 22:38:52.103 [INFO][4689] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29dad085-0fbc-4a72-8160-b942ebda8dbc", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a", Pod:"coredns-674b8bbfcf-zhgwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57ed54e5e6", MAC:"c6:a8:85:62:15:81", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:52.123701 containerd[1471]: 2025-07-14 22:38:52.116 [INFO][4689] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a" Namespace="kube-system" Pod="coredns-674b8bbfcf-zhgwx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:38:52.130224 containerd[1471]: time="2025-07-14T22:38:52.130168343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rnmm5,Uid:12e54208-f5d7-4225-a878-cbfd7ce81981,Namespace:calico-system,Attempt:1,} returns sandbox id \"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413\"" Jul 14 22:38:52.151988 containerd[1471]: time="2025-07-14T22:38:52.151901918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:52.151988 containerd[1471]: time="2025-07-14T22:38:52.151949979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:52.151988 containerd[1471]: time="2025-07-14T22:38:52.151959817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:52.152428 containerd[1471]: time="2025-07-14T22:38:52.152240197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:52.176702 systemd[1]: Started cri-containerd-0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a.scope - libcontainer container 0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a. Jul 14 22:38:52.191560 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:38:52.221254 containerd[1471]: time="2025-07-14T22:38:52.221172555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhgwx,Uid:29dad085-0fbc-4a72-8160-b942ebda8dbc,Namespace:kube-system,Attempt:1,} returns sandbox id \"0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a\"" Jul 14 22:38:52.222418 kubelet[2604]: E0714 22:38:52.222376 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:52.230263 containerd[1471]: time="2025-07-14T22:38:52.230201149Z" level=info msg="CreateContainer within sandbox \"0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:38:52.257070 containerd[1471]: time="2025-07-14T22:38:52.257010026Z" level=info msg="CreateContainer within sandbox \"0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4c29961dca28a3634a9a781507d84101ebf8ab4ef5803c3b67a80d3467f8fb4\"" Jul 14 22:38:52.258770 containerd[1471]: time="2025-07-14T22:38:52.257726430Z" level=info msg="StartContainer for \"f4c29961dca28a3634a9a781507d84101ebf8ab4ef5803c3b67a80d3467f8fb4\"" Jul 14 22:38:52.298616 systemd[1]: Started cri-containerd-f4c29961dca28a3634a9a781507d84101ebf8ab4ef5803c3b67a80d3467f8fb4.scope - libcontainer container f4c29961dca28a3634a9a781507d84101ebf8ab4ef5803c3b67a80d3467f8fb4. Jul 14 22:38:52.335996 containerd[1471]: time="2025-07-14T22:38:52.335935580Z" level=info msg="StartContainer for \"f4c29961dca28a3634a9a781507d84101ebf8ab4ef5803c3b67a80d3467f8fb4\" returns successfully" Jul 14 22:38:52.523422 containerd[1471]: time="2025-07-14T22:38:52.523068369Z" level=info msg="StopPodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\"" Jul 14 22:38:52.523422 containerd[1471]: time="2025-07-14T22:38:52.523125627Z" level=info msg="StopPodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\"" Jul 14 22:38:52.778192 systemd-networkd[1403]: vxlan.calico: Gained IPv6LL Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.739 [INFO][4963] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.740 [INFO][4963] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" iface="eth0" netns="/var/run/netns/cni-1d9fb9c8-769a-43c0-0668-0be3b2c11131" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.740 [INFO][4963] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" iface="eth0" netns="/var/run/netns/cni-1d9fb9c8-769a-43c0-0668-0be3b2c11131" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.740 [INFO][4963] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" iface="eth0" netns="/var/run/netns/cni-1d9fb9c8-769a-43c0-0668-0be3b2c11131" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.740 [INFO][4963] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.740 [INFO][4963] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.764 [INFO][4980] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.765 [INFO][4980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.765 [INFO][4980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.772 [WARNING][4980] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.772 [INFO][4980] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.773 [INFO][4980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:38:52.781305 containerd[1471]: 2025-07-14 22:38:52.777 [INFO][4963] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:38:52.781809 containerd[1471]: time="2025-07-14T22:38:52.781628341Z" level=info msg="TearDown network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" successfully" Jul 14 22:38:52.781809 containerd[1471]: time="2025-07-14T22:38:52.781663266Z" level=info msg="StopPodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" returns successfully" Jul 14 22:38:52.782611 containerd[1471]: time="2025-07-14T22:38:52.782557016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-2pbv4,Uid:61b0e57f-63f8-4046-911a-210a3070cdd1,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:38:52.867884 systemd[1]: run-netns-cni\x2d1d9fb9c8\x2d769a\x2d43c0\x2d0668\x2d0be3b2c11131.mount: Deactivated successfully. 
Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.862 [INFO][4962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.863 [INFO][4962] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" iface="eth0" netns="/var/run/netns/cni-20f4131d-0e66-7f30-836a-104fde4b5866" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.863 [INFO][4962] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" iface="eth0" netns="/var/run/netns/cni-20f4131d-0e66-7f30-836a-104fde4b5866" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.863 [INFO][4962] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" iface="eth0" netns="/var/run/netns/cni-20f4131d-0e66-7f30-836a-104fde4b5866" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.863 [INFO][4962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.864 [INFO][4962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.890 [INFO][4989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.890 [INFO][4989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.891 [INFO][4989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.898 [WARNING][4989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.898 [INFO][4989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.900 [INFO][4989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:38:52.911977 containerd[1471]: 2025-07-14 22:38:52.906 [INFO][4962] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:38:52.915515 containerd[1471]: time="2025-07-14T22:38:52.912472628Z" level=info msg="TearDown network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" successfully" Jul 14 22:38:52.915515 containerd[1471]: time="2025-07-14T22:38:52.912515029Z" level=info msg="StopPodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" returns successfully" Jul 14 22:38:52.916070 systemd[1]: run-netns-cni\x2d20f4131d\x2d0e66\x2d7f30\x2d836a\x2d104fde4b5866.mount: Deactivated successfully. Jul 14 22:38:52.916802 containerd[1471]: time="2025-07-14T22:38:52.916547048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798dc4cdb-fh8g7,Uid:121a6ca1-b03e-4bca-84d8-4cf70c6b267d,Namespace:calico-system,Attempt:1,}" Jul 14 22:38:52.971361 systemd-networkd[1403]: cali7878c77f52d: Gained IPv6LL Jul 14 22:38:53.008958 kubelet[2604]: E0714 22:38:53.008840 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:53.010119 kubelet[2604]: E0714 22:38:53.010067 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:53.050831 kubelet[2604]: I0714 22:38:53.050622 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zhgwx" podStartSLOduration=69.05060131 podStartE2EDuration="1m9.05060131s" podCreationTimestamp="2025-07-14 22:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:38:53.030812195 +0000 UTC m=+72.593116831" watchObservedRunningTime="2025-07-14 22:38:53.05060131 +0000 UTC m=+72.612905946" Jul 14 22:38:53.060552 systemd-networkd[1403]: calif995be761ac: Link UP Jul 14 22:38:53.063029 systemd-networkd[1403]: calif995be761ac: Gained carrier Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.948 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0 calico-apiserver-d69cdc74- calico-apiserver 61b0e57f-63f8-4046-911a-210a3070cdd1 1139 0 2025-07-14 22:38:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d69cdc74 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d69cdc74-2pbv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif995be761ac [] [] }} ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.949 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.983 
[INFO][5027] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" HandleID="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.983 [INFO][5027] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" HandleID="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b7690), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d69cdc74-2pbv4", "timestamp":"2025-07-14 22:38:52.983673774 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.983 [INFO][5027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.984 [INFO][5027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.984 [INFO][5027] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.990 [INFO][5027] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:52.997 [INFO][5027] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.007 [INFO][5027] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.010 [INFO][5027] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.015 [INFO][5027] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.015 [INFO][5027] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.023 [INFO][5027] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.034 [INFO][5027] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.045 [INFO][5027] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.045 [INFO][5027] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" host="localhost" Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.046 [INFO][5027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:38:53.082482 containerd[1471]: 2025-07-14 22:38:53.047 [INFO][5027] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" HandleID="k8s-pod-network.c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.083414 containerd[1471]: 2025-07-14 22:38:53.056 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"61b0e57f-63f8-4046-911a-210a3070cdd1", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d69cdc74-2pbv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif995be761ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:53.083414 containerd[1471]: 2025-07-14 22:38:53.056 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.083414 containerd[1471]: 2025-07-14 22:38:53.056 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif995be761ac ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.083414 containerd[1471]: 2025-07-14 22:38:53.064 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" 
Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.083414 containerd[1471]: 2025-07-14 22:38:53.066 [INFO][5001] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"61b0e57f-63f8-4046-911a-210a3070cdd1", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f", Pod:"calico-apiserver-d69cdc74-2pbv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif995be761ac", MAC:"12:58:36:77:ee:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:53.083414 containerd[1471]: 2025-07-14 22:38:53.078 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f" Namespace="calico-apiserver" Pod="calico-apiserver-d69cdc74-2pbv4" WorkloadEndpoint="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:38:53.132924 containerd[1471]: time="2025-07-14T22:38:53.132380488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:53.132924 containerd[1471]: time="2025-07-14T22:38:53.132514842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:53.132924 containerd[1471]: time="2025-07-14T22:38:53.132536643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:53.132924 containerd[1471]: time="2025-07-14T22:38:53.132675065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:53.145908 systemd-networkd[1403]: cali6b132486e0a: Link UP Jul 14 22:38:53.155952 systemd-networkd[1403]: cali6b132486e0a: Gained carrier Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:52.979 [INFO][5016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0 calico-kube-controllers-798dc4cdb- calico-system 121a6ca1-b03e-4bca-84d8-4cf70c6b267d 1140 0 2025-07-14 22:38:07 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:798dc4cdb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-798dc4cdb-fh8g7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6b132486e0a [] [] }} ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:52.979 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.027 [INFO][5037] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" HandleID="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.028 [INFO][5037] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" HandleID="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e760), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-798dc4cdb-fh8g7", "timestamp":"2025-07-14 22:38:53.027959214 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.028 [INFO][5037] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.048 [INFO][5037] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
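Worth noting in passing: the WorkloadEndpoint dumps in these entries print ports as Go hex literals, so `Port:0x35` and `Port:0x23c1` in the coredns endpoint earlier are simply 53 (DNS) and 9153 (CoreDNS metrics). A one-liner to confirm:

```go
package main

import "fmt"

func main() {
	// The hex values as they appear in the endpoint dump above.
	for _, p := range []uint16{0x35, 0x23c1} {
		fmt.Printf("%#x = %d\n", p, p) // 0x35 = 53, 0x23c1 = 9153
	}
}
```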
Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.048 [INFO][5037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.094 [INFO][5037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.102 [INFO][5037] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.109 [INFO][5037] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.111 [INFO][5037] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.115 [INFO][5037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.115 [INFO][5037] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.117 [INFO][5037] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303 Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.123 [INFO][5037] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.134 [INFO][5037] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.134 [INFO][5037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" host="localhost" Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.134 [INFO][5037] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
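The ipam_plugin trace above follows a fixed shape: acquire the host-wide lock, load the affine block (192.168.88.128/26, spanning .128–.191), claim the next free address, write the block back, release the lock. Below is a toy, in-memory sketch of that sequence; real Calico IPAM persists blocks in the datastore and tracks handles and affinities, so this is illustrative only:

```go
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu   sync.Mutex          // stands in for the "host-wide IPAM lock"
	cidr netip.Prefix        // the host's affine block
	used map[netip.Addr]bool // addresses already claimed
}

func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer b.mu.Unlock() // "Released host-wide IPAM lock."
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true // "Writing block in order to claim IPs"
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		used: map[netip.Addr]bool{},
	}
	// Pretend .128-.135 are taken, as they are by this point in the log.
	for a := netip.MustParseAddr("192.168.88.128"); a.Less(netip.MustParseAddr("192.168.88.136")); a = a.Next() {
		b.used[a] = true
	}
	a, ok := b.assign()
	fmt.Println(a, ok) // 192.168.88.136 true — matching the claim above
}
```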
Jul 14 22:38:53.177849 containerd[1471]: 2025-07-14 22:38:53.134 [INFO][5037] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" HandleID="k8s-pod-network.2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.178465 containerd[1471]: 2025-07-14 22:38:53.140 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0", GenerateName:"calico-kube-controllers-798dc4cdb-", Namespace:"calico-system", SelfLink:"", UID:"121a6ca1-b03e-4bca-84d8-4cf70c6b267d", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798dc4cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-798dc4cdb-fh8g7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b132486e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:53.178465 containerd[1471]: 2025-07-14 22:38:53.140 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.178465 containerd[1471]: 2025-07-14 22:38:53.140 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6b132486e0a ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.178465 containerd[1471]: 2025-07-14 22:38:53.156 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.178465 containerd[1471]: 2025-07-14 22:38:53.157 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, 
and active container ID to endpoint ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0", GenerateName:"calico-kube-controllers-798dc4cdb-", Namespace:"calico-system", SelfLink:"", UID:"121a6ca1-b03e-4bca-84d8-4cf70c6b267d", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798dc4cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303", Pod:"calico-kube-controllers-798dc4cdb-fh8g7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b132486e0a", MAC:"0e:9d:d3:02:6b:aa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:38:53.178465 containerd[1471]: 2025-07-14 22:38:53.172 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303" Namespace="calico-system" Pod="calico-kube-controllers-798dc4cdb-fh8g7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:38:53.181161 systemd[1]: Started cri-containerd-c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f.scope - libcontainer container c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f. Jul 14 22:38:53.201531 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:38:53.218671 containerd[1471]: time="2025-07-14T22:38:53.218469960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:38:53.218671 containerd[1471]: time="2025-07-14T22:38:53.218614724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:38:53.219676 containerd[1471]: time="2025-07-14T22:38:53.219223565Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:53.219676 containerd[1471]: time="2025-07-14T22:38:53.219419324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:38:53.226411 systemd-networkd[1403]: calie30dbbec646: Gained IPv6LL Jul 14 22:38:53.227265 systemd-networkd[1403]: calie57ed54e5e6: Gained IPv6LL Jul 14 22:38:53.239400 containerd[1471]: time="2025-07-14T22:38:53.239263023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d69cdc74-2pbv4,Uid:61b0e57f-63f8-4046-911a-210a3070cdd1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f\"" Jul 14 22:38:53.252689 systemd[1]: Started cri-containerd-2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303.scope - libcontainer container 2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303. Jul 14 22:38:53.268529 systemd-resolved[1336]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:38:53.292972 containerd[1471]: time="2025-07-14T22:38:53.292933704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-798dc4cdb-fh8g7,Uid:121a6ca1-b03e-4bca-84d8-4cf70c6b267d,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303\"" Jul 14 22:38:53.417631 systemd-networkd[1403]: cali10c398e07ad: Gained IPv6LL Jul 14 22:38:53.799107 containerd[1471]: time="2025-07-14T22:38:53.799045370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:53.800069 containerd[1471]: time="2025-07-14T22:38:53.800006796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 14 22:38:53.803051 containerd[1471]: time="2025-07-14T22:38:53.803017476Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:53.806239 containerd[1471]: time="2025-07-14T22:38:53.806199739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:38:53.807044 containerd[1471]: time="2025-07-14T22:38:53.806993329Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.919993558s" Jul 14 22:38:53.807044 containerd[1471]: time="2025-07-14T22:38:53.807028244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:38:53.808264 containerd[1471]: time="2025-07-14T22:38:53.808239093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:38:53.813498 containerd[1471]: time="2025-07-14T22:38:53.813428249Z" level=info msg="CreateContainer within sandbox \"121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:38:53.838832 containerd[1471]: time="2025-07-14T22:38:53.838770167Z" level=info msg="CreateContainer within sandbox 
\"121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"decaa1839b36890259713414fa3d05ec3a4e201eaf7a80d6baa7e999feb9d8cd\"" Jul 14 22:38:53.839437 containerd[1471]: time="2025-07-14T22:38:53.839408434Z" level=info msg="StartContainer for \"decaa1839b36890259713414fa3d05ec3a4e201eaf7a80d6baa7e999feb9d8cd\"" Jul 14 22:38:53.890675 systemd[1]: Started cri-containerd-decaa1839b36890259713414fa3d05ec3a4e201eaf7a80d6baa7e999feb9d8cd.scope - libcontainer container decaa1839b36890259713414fa3d05ec3a4e201eaf7a80d6baa7e999feb9d8cd. Jul 14 22:38:53.938304 containerd[1471]: time="2025-07-14T22:38:53.938252659Z" level=info msg="StartContainer for \"decaa1839b36890259713414fa3d05ec3a4e201eaf7a80d6baa7e999feb9d8cd\" returns successfully" Jul 14 22:38:54.014823 kubelet[2604]: E0714 22:38:54.014712 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:54.505633 systemd-networkd[1403]: calif995be761ac: Gained IPv6LL Jul 14 22:38:54.524623 kubelet[2604]: E0714 22:38:54.524584 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:54.953886 systemd-networkd[1403]: cali6b132486e0a: Gained IPv6LL Jul 14 22:38:55.016751 kubelet[2604]: E0714 22:38:55.016715 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:38:55.880006 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:56168.service - OpenSSH per-connection server daemon (10.0.0.1:56168). Jul 14 22:38:56.261114 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 56168 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:38:56.263092 sshd[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:38:56.267655 systemd-logind[1452]: New session 9 of user core. Jul 14 22:38:56.281710 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 14 22:38:56.632803 sshd[5199]: pam_unix(sshd:session): session closed for user core Jul 14 22:38:56.637174 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:56168.service: Deactivated successfully. Jul 14 22:38:56.639278 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:38:56.640012 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:38:56.640902 systemd-logind[1452]: Removed session 9. 
Jul 14 22:39:00.694772 kubelet[2604]: I0714 22:39:00.694555 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d69cdc74-djhwk" podStartSLOduration=52.773454535 podStartE2EDuration="56.694529767s" podCreationTimestamp="2025-07-14 22:38:04 +0000 UTC" firstStartedPulling="2025-07-14 22:38:49.886676279 +0000 UTC m=+69.448980915" lastFinishedPulling="2025-07-14 22:38:53.807751501 +0000 UTC m=+73.370056147" observedRunningTime="2025-07-14 22:38:54.149858894 +0000 UTC m=+73.712163530" watchObservedRunningTime="2025-07-14 22:39:00.694529767 +0000 UTC m=+80.256834403" Jul 14 22:39:00.917057 containerd[1471]: time="2025-07-14T22:39:00.916971919Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:00.933322 containerd[1471]: time="2025-07-14T22:39:00.933240396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 14 22:39:00.972193 containerd[1471]: time="2025-07-14T22:39:00.971990313Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:01.005330 containerd[1471]: time="2025-07-14T22:39:01.005234824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:01.006032 containerd[1471]: time="2025-07-14T22:39:01.005980502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 7.197714869s" Jul 14 22:39:01.006032 containerd[1471]: time="2025-07-14T22:39:01.006029224Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:39:01.007880 containerd[1471]: time="2025-07-14T22:39:01.007647208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:39:01.098814 containerd[1471]: time="2025-07-14T22:39:01.098766822Z" level=info msg="CreateContainer within sandbox \"6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:39:01.165483 containerd[1471]: time="2025-07-14T22:39:01.165400896Z" level=info msg="CreateContainer within sandbox \"6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a9e3f2d4bd4d6821f3ca75ce4bd5ba1dd4feff08e0038e95af50ff2712cfa020\"" Jul 14 22:39:01.166377 containerd[1471]: time="2025-07-14T22:39:01.166337995Z" level=info msg="StartContainer for \"a9e3f2d4bd4d6821f3ca75ce4bd5ba1dd4feff08e0038e95af50ff2712cfa020\"" Jul 14 22:39:01.207763 systemd[1]: Started cri-containerd-a9e3f2d4bd4d6821f3ca75ce4bd5ba1dd4feff08e0038e95af50ff2712cfa020.scope - libcontainer container a9e3f2d4bd4d6821f3ca75ce4bd5ba1dd4feff08e0038e95af50ff2712cfa020. 
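The `pod_startup_latency_tracker` entry above carries enough timestamps to rederive both figures: `podStartE2EDuration` is `watchObservedRunningTime - podCreationTimestamp`, and `podStartSLOduration` additionally subtracts the image-pull window (`lastFinishedPulling - firstStartedPulling`). A check against the calico-apiserver-d69cdc74-djhwk numbers; kubelet uses the monotonic `m=+…` readings, so the last digits of the SLO figure can differ slightly from this wall-clock arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-14 22:38:04 +0000 UTC")
	firstPull := mustParse("2025-07-14 22:38:49.886676279 +0000 UTC")
	lastPull := mustParse("2025-07-14 22:38:53.807751501 +0000 UTC")
	running := mustParse("2025-07-14 22:39:00.694529767 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("E2E:", e2e) // 56.694529767s, as logged
	fmt.Println("SLO:", slo) // ~52.77345s (logged: 52.773454535 via monotonic clock)
}
```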
Jul 14 22:39:01.287754 containerd[1471]: time="2025-07-14T22:39:01.287678465Z" level=info msg="StartContainer for \"a9e3f2d4bd4d6821f3ca75ce4bd5ba1dd4feff08e0038e95af50ff2712cfa020\" returns successfully" Jul 14 22:39:01.649297 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:48196.service - OpenSSH per-connection server daemon (10.0.0.1:48196). Jul 14 22:39:01.715636 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 48196 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:01.720257 sshd[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:01.730098 systemd-logind[1452]: New session 10 of user core. Jul 14 22:39:01.735738 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 22:39:01.878976 sshd[5267]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:01.884227 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:48196.service: Deactivated successfully. Jul 14 22:39:01.886831 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:39:01.887553 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:39:01.888570 systemd-logind[1452]: Removed session 10. Jul 14 22:39:03.256096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3458445691.mount: Deactivated successfully. Jul 14 22:39:03.823013 containerd[1471]: time="2025-07-14T22:39:03.822940225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:03.824008 containerd[1471]: time="2025-07-14T22:39:03.823957945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 14 22:39:03.825309 containerd[1471]: time="2025-07-14T22:39:03.825279610Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:03.888477 containerd[1471]: time="2025-07-14T22:39:03.888382224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:03.889439 containerd[1471]: time="2025-07-14T22:39:03.889392662Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 2.881700838s" Jul 14 22:39:03.889547 containerd[1471]: time="2025-07-14T22:39:03.889442887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 14 22:39:03.890920 containerd[1471]: time="2025-07-14T22:39:03.890820958Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:39:03.900652 containerd[1471]: time="2025-07-14T22:39:03.900593505Z" level=info msg="CreateContainer within sandbox \"13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 22:39:03.927078 containerd[1471]: time="2025-07-14T22:39:03.927014598Z" level=info msg="CreateContainer within sandbox 
\"13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"783084b34eba557c6e94e38779db394848b1edc7aa0b5e11d81dfa2239bfc9d7\"" Jul 14 22:39:03.927757 containerd[1471]: time="2025-07-14T22:39:03.927707226Z" level=info msg="StartContainer for \"783084b34eba557c6e94e38779db394848b1edc7aa0b5e11d81dfa2239bfc9d7\"" Jul 14 22:39:03.962606 systemd[1]: Started cri-containerd-783084b34eba557c6e94e38779db394848b1edc7aa0b5e11d81dfa2239bfc9d7.scope - libcontainer container 783084b34eba557c6e94e38779db394848b1edc7aa0b5e11d81dfa2239bfc9d7. Jul 14 22:39:04.507579 containerd[1471]: time="2025-07-14T22:39:04.507522865Z" level=info msg="StartContainer for \"783084b34eba557c6e94e38779db394848b1edc7aa0b5e11d81dfa2239bfc9d7\" returns successfully" Jul 14 22:39:06.278705 containerd[1471]: time="2025-07-14T22:39:06.278649200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:06.280145 containerd[1471]: time="2025-07-14T22:39:06.280077556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 14 22:39:06.281767 containerd[1471]: time="2025-07-14T22:39:06.281733772Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:06.284087 containerd[1471]: time="2025-07-14T22:39:06.284042108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:06.284697 containerd[1471]: time="2025-07-14T22:39:06.284665885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.39381456s" Jul 14 22:39:06.284741 containerd[1471]: time="2025-07-14T22:39:06.284696873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 14 22:39:06.285728 containerd[1471]: time="2025-07-14T22:39:06.285705086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:39:06.295066 containerd[1471]: time="2025-07-14T22:39:06.295007821Z" level=info msg="CreateContainer within sandbox \"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 14 22:39:06.530422 containerd[1471]: time="2025-07-14T22:39:06.530134716Z" level=info msg="CreateContainer within sandbox \"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"82be3fbdea406e66c962b1ec3330ba5af1e740c7cb876aec651468fb8588db35\"" Jul 14 22:39:06.534215 containerd[1471]: time="2025-07-14T22:39:06.533980814Z" level=info msg="StartContainer for \"82be3fbdea406e66c962b1ec3330ba5af1e740c7cb876aec651468fb8588db35\"" Jul 14 22:39:06.575783 systemd[1]: Started cri-containerd-82be3fbdea406e66c962b1ec3330ba5af1e740c7cb876aec651468fb8588db35.scope - libcontainer container 
82be3fbdea406e66c962b1ec3330ba5af1e740c7cb876aec651468fb8588db35. Jul 14 22:39:06.774992 containerd[1471]: time="2025-07-14T22:39:06.774938582Z" level=info msg="StartContainer for \"82be3fbdea406e66c962b1ec3330ba5af1e740c7cb876aec651468fb8588db35\" returns successfully" Jul 14 22:39:06.891930 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:48202.service - OpenSSH per-connection server daemon (10.0.0.1:48202). Jul 14 22:39:06.949286 sshd[5428]: Accepted publickey for core from 10.0.0.1 port 48202 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:06.951432 sshd[5428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:06.956373 systemd-logind[1452]: New session 11 of user core. Jul 14 22:39:06.966718 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 22:39:07.140791 sshd[5428]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:07.146619 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:48202.service: Deactivated successfully. Jul 14 22:39:07.149392 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:39:07.150679 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:39:07.152274 systemd-logind[1452]: Removed session 11. Jul 14 22:39:07.328395 containerd[1471]: time="2025-07-14T22:39:07.328315473Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:07.329807 containerd[1471]: time="2025-07-14T22:39:07.329743698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:39:07.333354 containerd[1471]: time="2025-07-14T22:39:07.333313685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.047581858s" Jul 14 22:39:07.333490 containerd[1471]: time="2025-07-14T22:39:07.333355945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:39:07.334545 containerd[1471]: time="2025-07-14T22:39:07.334500374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:39:07.341364 containerd[1471]: time="2025-07-14T22:39:07.341320894Z" level=info msg="CreateContainer within sandbox \"c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:39:07.363370 containerd[1471]: time="2025-07-14T22:39:07.363317101Z" level=info msg="CreateContainer within sandbox \"c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e11cecbdfa763eb3028300bc44648ea959f2a27f3e8289f32e0ceaaf93db73c5\"" Jul 14 22:39:07.364114 containerd[1471]: time="2025-07-14T22:39:07.363873631Z" level=info msg="StartContainer for \"e11cecbdfa763eb3028300bc44648ea959f2a27f3e8289f32e0ceaaf93db73c5\"" Jul 14 22:39:07.393726 systemd[1]: Started cri-containerd-e11cecbdfa763eb3028300bc44648ea959f2a27f3e8289f32e0ceaaf93db73c5.scope - libcontainer container 
e11cecbdfa763eb3028300bc44648ea959f2a27f3e8289f32e0ceaaf93db73c5. Jul 14 22:39:07.441148 containerd[1471]: time="2025-07-14T22:39:07.441002681Z" level=info msg="StartContainer for \"e11cecbdfa763eb3028300bc44648ea959f2a27f3e8289f32e0ceaaf93db73c5\" returns successfully" Jul 14 22:39:07.522486 kubelet[2604]: E0714 22:39:07.521785 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:39:07.621410 kubelet[2604]: I0714 22:39:07.620137 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d69cdc74-2pbv4" podStartSLOduration=49.52690044 podStartE2EDuration="1m3.620116894s" podCreationTimestamp="2025-07-14 22:38:04 +0000 UTC" firstStartedPulling="2025-07-14 22:38:53.240857987 +0000 UTC m=+72.803162623" lastFinishedPulling="2025-07-14 22:39:07.334074441 +0000 UTC m=+86.896379077" observedRunningTime="2025-07-14 22:39:07.619225482 +0000 UTC m=+87.181530128" watchObservedRunningTime="2025-07-14 22:39:07.620116894 +0000 UTC m=+87.182421530" Jul 14 22:39:07.621410 kubelet[2604]: I0714 22:39:07.620439 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-4z895" podStartSLOduration=48.780814785 podStartE2EDuration="1m0.620423212s" podCreationTimestamp="2025-07-14 22:38:07 +0000 UTC" firstStartedPulling="2025-07-14 22:38:52.051079991 +0000 UTC m=+71.613384627" lastFinishedPulling="2025-07-14 22:39:03.890688418 +0000 UTC m=+83.452993054" observedRunningTime="2025-07-14 22:39:05.591404737 +0000 UTC m=+85.153709373" watchObservedRunningTime="2025-07-14 22:39:07.620423212 +0000 UTC m=+87.182727848" Jul 14 22:39:08.522745 kubelet[2604]: I0714 22:39:08.522696 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:39:12.163004 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:39280.service - OpenSSH per-connection server daemon (10.0.0.1:39280). Jul 14 22:39:12.254832 sshd[5540]: Accepted publickey for core from 10.0.0.1 port 39280 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:12.257469 sshd[5540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:12.267186 systemd-logind[1452]: New session 12 of user core. Jul 14 22:39:12.276705 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 22:39:14.893398 sshd[5540]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:14.897665 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:39280.service: Deactivated successfully. Jul 14 22:39:14.899958 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:39:14.900639 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit. Jul 14 22:39:14.901597 systemd-logind[1452]: Removed session 12. 
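The recurring kubelet `Nameserver limits exceeded` errors mean the node's resolv.conf lists more nameservers than the libc resolver honors (three, `MAXNS`); kubelet applies the first three — here `1.1.1.1 1.0.0.1 8.8.8.8` — and drops the rest. A minimal standalone check along those lines (the reporting format is made up for illustration):

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNS = 3 // glibc MAXNS: only the first three nameservers are used

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNS {
		fmt.Printf("applied: %v, dropped: %v\n", servers[:maxNS], servers[maxNS:])
	} else {
		fmt.Printf("all %d nameservers fit: %v\n", len(servers), servers)
	}
}
```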
Jul 14 22:39:15.149422 containerd[1471]: time="2025-07-14T22:39:15.149244087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:15.270828 containerd[1471]: time="2025-07-14T22:39:15.270755523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 22:39:15.410297 containerd[1471]: time="2025-07-14T22:39:15.410142059Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:15.652317 containerd[1471]: time="2025-07-14T22:39:15.652225479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:15.653406 containerd[1471]: time="2025-07-14T22:39:15.653351472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 8.318815081s" Jul 14 22:39:15.653406 containerd[1471]: time="2025-07-14T22:39:15.653396508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:39:15.654534 containerd[1471]: time="2025-07-14T22:39:15.654496672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:39:15.947161 containerd[1471]: time="2025-07-14T22:39:15.947119203Z" level=info msg="CreateContainer within sandbox \"2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:39:17.085467 containerd[1471]: time="2025-07-14T22:39:17.085393535Z" level=info msg="CreateContainer within sandbox \"2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5b313850b86ced71d87dd79ead57a0b51c3d1010ebffe8dfa940292a93f40c28\"" Jul 14 22:39:17.087367 containerd[1471]: time="2025-07-14T22:39:17.085992675Z" level=info msg="StartContainer for \"5b313850b86ced71d87dd79ead57a0b51c3d1010ebffe8dfa940292a93f40c28\"" Jul 14 22:39:17.142606 systemd[1]: Started cri-containerd-5b313850b86ced71d87dd79ead57a0b51c3d1010ebffe8dfa940292a93f40c28.scope - libcontainer container 5b313850b86ced71d87dd79ead57a0b51c3d1010ebffe8dfa940292a93f40c28. 
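Most containerd entries in this section are logfmt-style (`time="…" level=info msg="…"`) with inner quotes backslash-escaped. A minimal extractor for the quoted fields, fed a truncated copy of the `Pulled image … in 8.318815081s` entry above (the regex and the truncation are illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// field grabs key="value" pairs, allowing \" escapes inside the value.
var field = regexp.MustCompile(`(\w+)="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `time="2025-07-14T22:39:15.653351472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" ... in 8.318815081s"`
	for _, m := range field.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%s => %s\n", m[1], m[2])
	}
	// time => 2025-07-14T22:39:15.653351472Z
	// msg  => Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" ... in 8.318815081s
	// (level=info is unquoted, so this extractor deliberately skips it)
}
```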
Jul 14 22:39:17.237819 containerd[1471]: time="2025-07-14T22:39:17.237743874Z" level=info msg="StartContainer for \"5b313850b86ced71d87dd79ead57a0b51c3d1010ebffe8dfa940292a93f40c28\" returns successfully" Jul 14 22:39:17.673158 kubelet[2604]: I0714 22:39:17.672999 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-798dc4cdb-fh8g7" podStartSLOduration=48.31340747 podStartE2EDuration="1m10.672974967s" podCreationTimestamp="2025-07-14 22:38:07 +0000 UTC" firstStartedPulling="2025-07-14 22:38:53.29469879 +0000 UTC m=+72.857003426" lastFinishedPulling="2025-07-14 22:39:15.654266287 +0000 UTC m=+95.216570923" observedRunningTime="2025-07-14 22:39:17.672104186 +0000 UTC m=+97.234408822" watchObservedRunningTime="2025-07-14 22:39:17.672974967 +0000 UTC m=+97.235279603" Jul 14 22:39:18.521819 kubelet[2604]: E0714 22:39:18.521761 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:39:18.862305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634476547.mount: Deactivated successfully. Jul 14 22:39:19.905270 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:57882.service - OpenSSH per-connection server daemon (10.0.0.1:57882). Jul 14 22:39:20.016157 sshd[5639]: Accepted publickey for core from 10.0.0.1 port 57882 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:20.017982 sshd[5639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:20.022051 systemd-logind[1452]: New session 13 of user core. Jul 14 22:39:20.028596 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 22:39:20.105939 containerd[1471]: time="2025-07-14T22:39:20.105866060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:20.146071 containerd[1471]: time="2025-07-14T22:39:20.145970351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 14 22:39:20.174130 containerd[1471]: time="2025-07-14T22:39:20.173950231Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:20.203692 containerd[1471]: time="2025-07-14T22:39:20.203632913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:20.204372 containerd[1471]: time="2025-07-14T22:39:20.204339655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.549805352s" Jul 14 22:39:20.204372 containerd[1471]: time="2025-07-14T22:39:20.204371626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 22:39:20.205958 containerd[1471]: time="2025-07-14T22:39:20.205926256Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 14 22:39:20.291219 containerd[1471]: time="2025-07-14T22:39:20.291138259Z" level=info msg="CreateContainer within sandbox \"6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:39:20.302618 sshd[5639]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:20.312329 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:57882.service: Deactivated successfully. Jul 14 22:39:20.314219 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:39:20.315652 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:39:20.327808 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:57886.service - OpenSSH per-connection server daemon (10.0.0.1:57886). Jul 14 22:39:20.328853 systemd-logind[1452]: Removed session 13. Jul 14 22:39:20.364088 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 57886 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:20.365701 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:20.369757 systemd-logind[1452]: New session 14 of user core. Jul 14 22:39:20.377614 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 22:39:20.714544 sshd[5654]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:20.722639 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:57886.service: Deactivated successfully. Jul 14 22:39:20.724782 systemd[1]: session-14.scope: Deactivated successfully. Jul 14 22:39:20.726372 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:39:20.728899 containerd[1471]: time="2025-07-14T22:39:20.728852223Z" level=info msg="CreateContainer within sandbox \"6d4230c09b3512cf7b9e6dd4932ac2c13c542b40b3f135097710c1aa5b3a6317\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"1d7db1ad709fd50c79c45379e3e1c2c314c4ee96206ec3752affdd0109041abb\"" Jul 14 22:39:20.729611 containerd[1471]: time="2025-07-14T22:39:20.729590886Z" level=info msg="StartContainer for \"1d7db1ad709fd50c79c45379e3e1c2c314c4ee96206ec3752affdd0109041abb\"" Jul 14 22:39:20.731820 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:57902.service - OpenSSH per-connection server daemon (10.0.0.1:57902). Jul 14 22:39:20.732999 systemd-logind[1452]: Removed session 14. Jul 14 22:39:20.771256 systemd[1]: Started cri-containerd-1d7db1ad709fd50c79c45379e3e1c2c314c4ee96206ec3752affdd0109041abb.scope - libcontainer container 1d7db1ad709fd50c79c45379e3e1c2c314c4ee96206ec3752affdd0109041abb. Jul 14 22:39:20.795548 sshd[5666]: Accepted publickey for core from 10.0.0.1 port 57902 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:20.797497 sshd[5666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:20.803209 systemd-logind[1452]: New session 15 of user core. Jul 14 22:39:20.808635 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 14 22:39:20.936531 containerd[1471]: time="2025-07-14T22:39:20.936415597Z" level=info msg="StartContainer for \"1d7db1ad709fd50c79c45379e3e1c2c314c4ee96206ec3752affdd0109041abb\" returns successfully" Jul 14 22:39:21.055314 sshd[5666]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:21.061821 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:57902.service: Deactivated successfully. 
Jul 14 22:39:21.064314 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:39:21.065288 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:39:21.066784 systemd-logind[1452]: Removed session 15. Jul 14 22:39:22.853830 containerd[1471]: time="2025-07-14T22:39:22.853733719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 14 22:39:22.860070 containerd[1471]: time="2025-07-14T22:39:22.860013417Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.654038328s" Jul 14 22:39:22.860070 containerd[1471]: time="2025-07-14T22:39:22.860050628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 14 22:39:22.865901 containerd[1471]: time="2025-07-14T22:39:22.865865810Z" level=info msg="CreateContainer within sandbox \"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 14 22:39:22.871872 containerd[1471]: time="2025-07-14T22:39:22.871789106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:22.872765 containerd[1471]: time="2025-07-14T22:39:22.872729930Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:22.873459 containerd[1471]: time="2025-07-14T22:39:22.873413388Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:39:22.906634 containerd[1471]: time="2025-07-14T22:39:22.906579321Z" level=info msg="CreateContainer within sandbox \"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c71d8fb1f6a76b6c32a50b00c9430f421e54a3fe2671ae06cf524cebd8281ca6\"" Jul 14 22:39:22.907180 containerd[1471]: time="2025-07-14T22:39:22.907123245Z" level=info msg="StartContainer for \"c71d8fb1f6a76b6c32a50b00c9430f421e54a3fe2671ae06cf524cebd8281ca6\"" Jul 14 22:39:22.957632 systemd[1]: Started cri-containerd-c71d8fb1f6a76b6c32a50b00c9430f421e54a3fe2671ae06cf524cebd8281ca6.scope - libcontainer container c71d8fb1f6a76b6c32a50b00c9430f421e54a3fe2671ae06cf524cebd8281ca6. 
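The m=+... suffixes on the kubelet timestamps (m=+102.422960364 and so on) are Go's monotonic clock reading, appended by time.Time.String(): seconds since the process captured its first time.Time. Differences between m=+ values are therefore exact intervals even if the wall clock steps; for the goldmane record earlier, 83.452993054 - 71.613384627 = 11.839608427s of pulling, and 1m0.620423212s minus that is exactly the logged podStartSLOduration of 48.780814785. A two-line illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	t0 := time.Now()
	fmt.Println(t0) // ends with an "m=+0.0000..." monotonic suffix, like the kubelet fields
	time.Sleep(50 * time.Millisecond)
	fmt.Println(time.Since(t0)) // computed from the monotonic readings
}
```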
Jul 14 22:39:22.990941 containerd[1471]: time="2025-07-14T22:39:22.990888073Z" level=info msg="StartContainer for \"c71d8fb1f6a76b6c32a50b00c9430f421e54a3fe2671ae06cf524cebd8281ca6\" returns successfully" Jul 14 22:39:23.672753 kubelet[2604]: I0714 22:39:23.672703 2604 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 14 22:39:23.674193 kubelet[2604]: I0714 22:39:23.674166 2604 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 14 22:39:23.721160 kubelet[2604]: I0714 22:39:23.720837 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-rnmm5" podStartSLOduration=45.991773748 podStartE2EDuration="1m16.720821122s" podCreationTimestamp="2025-07-14 22:38:07 +0000 UTC" firstStartedPulling="2025-07-14 22:38:52.131608354 +0000 UTC m=+71.693912990" lastFinishedPulling="2025-07-14 22:39:22.860655718 +0000 UTC m=+102.422960364" observedRunningTime="2025-07-14 22:39:23.720769095 +0000 UTC m=+103.283073721" watchObservedRunningTime="2025-07-14 22:39:23.720821122 +0000 UTC m=+103.283125758" Jul 14 22:39:23.721160 kubelet[2604]: I0714 22:39:23.720930 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6c9bdc6668-k8xfc" podStartSLOduration=4.615280743 podStartE2EDuration="33.72092615s" podCreationTimestamp="2025-07-14 22:38:50 +0000 UTC" firstStartedPulling="2025-07-14 22:38:51.099586391 +0000 UTC m=+70.661891027" lastFinishedPulling="2025-07-14 22:39:20.205231797 +0000 UTC m=+99.767536434" observedRunningTime="2025-07-14 22:39:21.756674798 +0000 UTC m=+101.318979444" watchObservedRunningTime="2025-07-14 22:39:23.72092615 +0000 UTC m=+103.283230786" Jul 14 22:39:26.066988 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:57908.service - OpenSSH per-connection server daemon (10.0.0.1:57908). Jul 14 22:39:26.123766 sshd[5786]: Accepted publickey for core from 10.0.0.1 port 57908 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:26.125848 sshd[5786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:26.130284 systemd-logind[1452]: New session 16 of user core. Jul 14 22:39:26.140624 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 22:39:26.594387 sshd[5786]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:26.601412 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:57908.service: Deactivated successfully. Jul 14 22:39:26.603805 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:39:26.604578 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:39:26.605531 systemd-logind[1452]: Removed session 16. Jul 14 22:39:31.209027 kubelet[2604]: I0714 22:39:31.208960 2604 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:39:31.606837 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:57198.service - OpenSSH per-connection server daemon (10.0.0.1:57198). Jul 14 22:39:31.646969 sshd[5807]: Accepted publickey for core from 10.0.0.1 port 57198 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:31.648687 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:31.652882 systemd-logind[1452]: New session 17 of user core. 
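The two csi_plugin.go lines are kubelet's half of the node-driver-registrar handshake: the registrar serves a small gRPC Registration service on a socket under the kubelet plugin directory, kubelet calls GetInfo, validates the advertised name/endpoint/versions (csi_plugin.go:106), then registers the plugin (csi_plugin.go:119). A hedged sketch of the registrar side using the pluginregistration v1 API; the driver name, endpoint, and version are taken from the log, while the registration socket path and the rest of the wiring are illustrative:

```go
package main

import (
	"context"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

type registrar struct{}

func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	// Matches the kubelet record: name csi.tigera.io, endpoint
	// /var/lib/kubelet/plugins/csi.tigera.io/csi.sock, versions 1.0.0.
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, _ *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	// kubelet calls back to report whether csi_plugin.go accepted the driver.
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet's plugin watcher scans this directory for registration sockets.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		panic(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrar{})
	_ = srv.Serve(l)
}
```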
Jul 14 22:39:31.659618 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 22:39:32.049496 sshd[5807]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:32.054056 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:57198.service: Deactivated successfully. Jul 14 22:39:32.056437 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:39:32.057150 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:39:32.058093 systemd-logind[1452]: Removed session 17. Jul 14 22:39:37.061986 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:57202.service - OpenSSH per-connection server daemon (10.0.0.1:57202). Jul 14 22:39:37.119328 sshd[5830]: Accepted publickey for core from 10.0.0.1 port 57202 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:37.121610 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:37.126425 systemd-logind[1452]: New session 18 of user core. Jul 14 22:39:37.139629 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 22:39:37.263204 sshd[5830]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:37.267134 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:57202.service: Deactivated successfully. Jul 14 22:39:37.270572 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:39:37.272196 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:39:37.273474 systemd-logind[1452]: Removed session 18. Jul 14 22:39:41.522415 kubelet[2604]: E0714 22:39:41.522373 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:39:42.276749 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:47028.service - OpenSSH per-connection server daemon (10.0.0.1:47028). Jul 14 22:39:42.321081 sshd[5870]: Accepted publickey for core from 10.0.0.1 port 47028 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:42.323050 sshd[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:42.327554 systemd-logind[1452]: New session 19 of user core. Jul 14 22:39:42.337722 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 22:39:42.460104 sshd[5870]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:42.463898 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:47028.service: Deactivated successfully. Jul 14 22:39:42.465841 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:39:42.466387 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:39:42.467276 systemd-logind[1452]: Removed session 19. Jul 14 22:39:42.530219 containerd[1471]: time="2025-07-14T22:39:42.530031676Z" level=info msg="StopPodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\"" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:42.961 [WARNING][5894] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"23ede824-2f3c-4c4c-b760-db02257f0bab", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f", Pod:"coredns-674b8bbfcf-9lnpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab2774d67e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:42.962 [INFO][5894] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:42.962 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" iface="eth0" netns="" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:42.962 [INFO][5894] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:42.962 [INFO][5894] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.044 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.044 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.044 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.050 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.050 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.051 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:43.059806 containerd[1471]: 2025-07-14 22:39:43.056 [INFO][5894] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.067530 containerd[1471]: time="2025-07-14T22:39:43.067466714Z" level=info msg="TearDown network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" successfully" Jul 14 22:39:43.067530 containerd[1471]: time="2025-07-14T22:39:43.067510105Z" level=info msg="StopPodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" returns successfully" Jul 14 22:39:43.097492 containerd[1471]: time="2025-07-14T22:39:43.097382084Z" level=info msg="RemovePodSandbox for \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\"" Jul 14 22:39:43.099616 containerd[1471]: time="2025-07-14T22:39:43.099584543Z" level=info msg="Forcibly stopping sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\"" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.322 [WARNING][5921] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"23ede824-2f3c-4c4c-b760-db02257f0bab", ResourceVersion:"1121", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"86fc1c17ff8ce69b90987af2e2871a068167e9ae7b70edbfa5fe9345245d344f", Pod:"coredns-674b8bbfcf-9lnpr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliab2774d67e4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.323 [INFO][5921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.323 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" iface="eth0" netns="" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.323 [INFO][5921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.323 [INFO][5921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.346 [INFO][5930] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.346 [INFO][5930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.346 [INFO][5930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.352 [WARNING][5930] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.353 [INFO][5930] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" HandleID="k8s-pod-network.8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Workload="localhost-k8s-coredns--674b8bbfcf--9lnpr-eth0" Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.354 [INFO][5930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:43.361203 containerd[1471]: 2025-07-14 22:39:43.357 [INFO][5921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a" Jul 14 22:39:43.361203 containerd[1471]: time="2025-07-14T22:39:43.361144583Z" level=info msg="TearDown network for sandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" successfully" Jul 14 22:39:43.529756 containerd[1471]: time="2025-07-14T22:39:43.529685817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:43.538694 containerd[1471]: time="2025-07-14T22:39:43.538653619Z" level=info msg="RemovePodSandbox \"8b8287c1c6c5d86fc9b919a03891b96b9e2cd84d42f1ea88c4d4681370055b2a\" returns successfully" Jul 14 22:39:43.545401 containerd[1471]: time="2025-07-14T22:39:43.545368418Z" level=info msg="StopPodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\"" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.670 [WARNING][5948] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" WorkloadEndpoint="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.670 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.670 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" iface="eth0" netns="" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.670 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.670 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.690 [INFO][5956] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.691 [INFO][5956] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.691 [INFO][5956] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.697 [WARNING][5956] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.697 [INFO][5956] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.698 [INFO][5956] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:43.704213 containerd[1471]: 2025-07-14 22:39:43.701 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.704213 containerd[1471]: time="2025-07-14T22:39:43.704193349Z" level=info msg="TearDown network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" successfully" Jul 14 22:39:43.705018 containerd[1471]: time="2025-07-14T22:39:43.704222925Z" level=info msg="StopPodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" returns successfully" Jul 14 22:39:43.705018 containerd[1471]: time="2025-07-14T22:39:43.704812025Z" level=info msg="RemovePodSandbox for \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\"" Jul 14 22:39:43.705018 containerd[1471]: time="2025-07-14T22:39:43.704841961Z" level=info msg="Forcibly stopping sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\"" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.737 [WARNING][5974] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" WorkloadEndpoint="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.737 [INFO][5974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.737 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" iface="eth0" netns="" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.737 [INFO][5974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.737 [INFO][5974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.757 [INFO][5983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.758 [INFO][5983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.758 [INFO][5983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.763 [WARNING][5983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.763 [INFO][5983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" HandleID="k8s-pod-network.e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Workload="localhost-k8s-whisker--845d86f57b--r96bf-eth0" Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.764 [INFO][5983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:43.769705 containerd[1471]: 2025-07-14 22:39:43.766 [INFO][5974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52" Jul 14 22:39:43.770085 containerd[1471]: time="2025-07-14T22:39:43.769760145Z" level=info msg="TearDown network for sandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" successfully" Jul 14 22:39:44.016077 containerd[1471]: time="2025-07-14T22:39:44.016023690Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:44.016194 containerd[1471]: time="2025-07-14T22:39:44.016101357Z" level=info msg="RemovePodSandbox \"e8024dee3a9d3459ae64cb9e46059d62fa99a6010ba4f1894b07557cbaaa9f52\" returns successfully" Jul 14 22:39:44.016564 containerd[1471]: time="2025-07-14T22:39:44.016520136Z" level=info msg="StopPodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\"" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.057 [WARNING][6000] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnmm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12e54208-f5d7-4225-a878-cbfd7ce81981", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413", Pod:"csi-node-driver-rnmm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie30dbbec646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.057 [INFO][6000] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.057 [INFO][6000] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" iface="eth0" netns="" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.058 [INFO][6000] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.058 [INFO][6000] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.076 [INFO][6009] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.076 [INFO][6009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.076 [INFO][6009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.082 [WARNING][6009] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.082 [INFO][6009] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.083 [INFO][6009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.090579 containerd[1471]: 2025-07-14 22:39:44.086 [INFO][6000] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.091293 containerd[1471]: time="2025-07-14T22:39:44.090628888Z" level=info msg="TearDown network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" successfully" Jul 14 22:39:44.091293 containerd[1471]: time="2025-07-14T22:39:44.090661078Z" level=info msg="StopPodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" returns successfully" Jul 14 22:39:44.091293 containerd[1471]: time="2025-07-14T22:39:44.091125122Z" level=info msg="RemovePodSandbox for \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\"" Jul 14 22:39:44.091293 containerd[1471]: time="2025-07-14T22:39:44.091154698Z" level=info msg="Forcibly stopping sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\"" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.144 [WARNING][6026] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rnmm5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"12e54208-f5d7-4225-a878-cbfd7ce81981", ResourceVersion:"1346", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f653e17906d8426c080dc884ae3871e80485c752e9d08b0ba1f13423e42f6413", Pod:"csi-node-driver-rnmm5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie30dbbec646", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.144 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.144 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" iface="eth0" netns="" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.144 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.144 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.165 [INFO][6035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.165 [INFO][6035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.165 [INFO][6035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.170 [WARNING][6035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.170 [INFO][6035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" HandleID="k8s-pod-network.fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Workload="localhost-k8s-csi--node--driver--rnmm5-eth0" Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.171 [INFO][6035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.176563 containerd[1471]: 2025-07-14 22:39:44.173 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5" Jul 14 22:39:44.177119 containerd[1471]: time="2025-07-14T22:39:44.176614812Z" level=info msg="TearDown network for sandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" successfully" Jul 14 22:39:44.343141 containerd[1471]: time="2025-07-14T22:39:44.342985940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:44.343141 containerd[1471]: time="2025-07-14T22:39:44.343073335Z" level=info msg="RemovePodSandbox \"fb5060fced2526d90d6b7d10e6460dd3c1ef2e36ca5aa207c39e88ff53bd9be5\" returns successfully" Jul 14 22:39:44.343762 containerd[1471]: time="2025-07-14T22:39:44.343698252Z" level=info msg="StopPodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\"" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.378 [WARNING][6053] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb8d9286-50a6-4899-a9a8-90e0bbd55a23", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d", Pod:"calico-apiserver-d69cdc74-djhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidab2dcf2b55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.378 [INFO][6053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.378 [INFO][6053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" iface="eth0" netns="" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.378 [INFO][6053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.378 [INFO][6053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.398 [INFO][6062] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.398 [INFO][6062] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.398 [INFO][6062] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.404 [WARNING][6062] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.404 [INFO][6062] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.406 [INFO][6062] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.412147 containerd[1471]: 2025-07-14 22:39:44.409 [INFO][6053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.412761 containerd[1471]: time="2025-07-14T22:39:44.412186307Z" level=info msg="TearDown network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" successfully" Jul 14 22:39:44.412761 containerd[1471]: time="2025-07-14T22:39:44.412214700Z" level=info msg="StopPodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" returns successfully" Jul 14 22:39:44.412761 containerd[1471]: time="2025-07-14T22:39:44.412704362Z" level=info msg="RemovePodSandbox for \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\"" Jul 14 22:39:44.412761 containerd[1471]: time="2025-07-14T22:39:44.412734629Z" level=info msg="Forcibly stopping sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\"" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.449 [WARNING][6080] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"eb8d9286-50a6-4899-a9a8-90e0bbd55a23", ResourceVersion:"1190", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"121b17c6a2ee34a8bfa6e9a996895931e12c265fd50eba362312f3e910774d8d", Pod:"calico-apiserver-d69cdc74-djhwk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidab2dcf2b55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.449 [INFO][6080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.449 [INFO][6080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" iface="eth0" netns="" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.449 [INFO][6080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.449 [INFO][6080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.469 [INFO][6089] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.469 [INFO][6089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.469 [INFO][6089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.475 [WARNING][6089] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.476 [INFO][6089] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" HandleID="k8s-pod-network.91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Workload="localhost-k8s-calico--apiserver--d69cdc74--djhwk-eth0" Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.477 [INFO][6089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.483365 containerd[1471]: 2025-07-14 22:39:44.479 [INFO][6080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709" Jul 14 22:39:44.483913 containerd[1471]: time="2025-07-14T22:39:44.483384817Z" level=info msg="TearDown network for sandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" successfully" Jul 14 22:39:44.639498 containerd[1471]: time="2025-07-14T22:39:44.638763790Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:44.639498 containerd[1471]: time="2025-07-14T22:39:44.638814626Z" level=info msg="RemovePodSandbox \"91f824f57560f9ca8b9dbe237816827e23b9c503dd50ac8ade169ca9a79bc709\" returns successfully" Jul 14 22:39:44.639498 containerd[1471]: time="2025-07-14T22:39:44.639028199Z" level=info msg="StopPodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\"" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.672 [WARNING][6107] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"61b0e57f-63f8-4046-911a-210a3070cdd1", ResourceVersion:"1383", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f", Pod:"calico-apiserver-d69cdc74-2pbv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif995be761ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.672 [INFO][6107] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.672 [INFO][6107] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" iface="eth0" netns="" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.672 [INFO][6107] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.672 [INFO][6107] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.690 [INFO][6116] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.690 [INFO][6116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.691 [INFO][6116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.696 [WARNING][6116] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.696 [INFO][6116] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.698 [INFO][6116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.703438 containerd[1471]: 2025-07-14 22:39:44.700 [INFO][6107] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.704014 containerd[1471]: time="2025-07-14T22:39:44.703492825Z" level=info msg="TearDown network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" successfully" Jul 14 22:39:44.704014 containerd[1471]: time="2025-07-14T22:39:44.703518383Z" level=info msg="StopPodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" returns successfully" Jul 14 22:39:44.704014 containerd[1471]: time="2025-07-14T22:39:44.703999439Z" level=info msg="RemovePodSandbox for \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\"" Jul 14 22:39:44.704082 containerd[1471]: time="2025-07-14T22:39:44.704022914Z" level=info msg="Forcibly stopping sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\"" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.764 [WARNING][6133] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0", GenerateName:"calico-apiserver-d69cdc74-", Namespace:"calico-apiserver", SelfLink:"", UID:"61b0e57f-63f8-4046-911a-210a3070cdd1", ResourceVersion:"1383", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d69cdc74", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c1ca1c160969dbca6082b6a0c28b1e6235b27ee77de75571e25b7eb3709fdf1f", Pod:"calico-apiserver-d69cdc74-2pbv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif995be761ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.764 [INFO][6133] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.764 [INFO][6133] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" iface="eth0" netns="" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.764 [INFO][6133] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.764 [INFO][6133] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.787 [INFO][6141] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.787 [INFO][6141] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.787 [INFO][6141] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.793 [WARNING][6141] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.793 [INFO][6141] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" HandleID="k8s-pod-network.bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Workload="localhost-k8s-calico--apiserver--d69cdc74--2pbv4-eth0" Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.794 [INFO][6141] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.800397 containerd[1471]: 2025-07-14 22:39:44.797 [INFO][6133] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e" Jul 14 22:39:44.800924 containerd[1471]: time="2025-07-14T22:39:44.800470573Z" level=info msg="TearDown network for sandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" successfully" Jul 14 22:39:44.879617 containerd[1471]: time="2025-07-14T22:39:44.879545359Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:44.879617 containerd[1471]: time="2025-07-14T22:39:44.879624267Z" level=info msg="RemovePodSandbox \"bf3d36df22769c5b4c7f905b6b11495b4f2a0f4fc61e840af46e38c2ca83ff4e\" returns successfully" Jul 14 22:39:44.880233 containerd[1471]: time="2025-07-14T22:39:44.880177510Z" level=info msg="StopPodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\"" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.917 [WARNING][6159] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29dad085-0fbc-4a72-8160-b942ebda8dbc", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a", Pod:"coredns-674b8bbfcf-zhgwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57ed54e5e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.917 [INFO][6159] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.917 [INFO][6159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" iface="eth0" netns="" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.917 [INFO][6159] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.917 [INFO][6159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.938 [INFO][6167] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.938 [INFO][6167] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.938 [INFO][6167] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.944 [WARNING][6167] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.944 [INFO][6167] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.945 [INFO][6167] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:44.951167 containerd[1471]: 2025-07-14 22:39:44.948 [INFO][6159] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:44.951167 containerd[1471]: time="2025-07-14T22:39:44.951144895Z" level=info msg="TearDown network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" successfully" Jul 14 22:39:44.951594 containerd[1471]: time="2025-07-14T22:39:44.951172637Z" level=info msg="StopPodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" returns successfully" Jul 14 22:39:44.951687 containerd[1471]: time="2025-07-14T22:39:44.951663853Z" level=info msg="RemovePodSandbox for \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\"" Jul 14 22:39:44.951722 containerd[1471]: time="2025-07-14T22:39:44.951691044Z" level=info msg="Forcibly stopping sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\"" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:44.986 [WARNING][6186] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"29dad085-0fbc-4a72-8160-b942ebda8dbc", ResourceVersion:"1154", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0ab6ac1ec5cdc013d700875929be84d735e32040a1c78fad23b61b12baf7702a", Pod:"coredns-674b8bbfcf-zhgwx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie57ed54e5e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:44.986 [INFO][6186] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:44.986 [INFO][6186] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" iface="eth0" netns="" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:44.986 [INFO][6186] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:44.986 [INFO][6186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.008 [INFO][6195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.008 [INFO][6195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.008 [INFO][6195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.015 [WARNING][6195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.015 [INFO][6195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" HandleID="k8s-pod-network.043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Workload="localhost-k8s-coredns--674b8bbfcf--zhgwx-eth0" Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.017 [INFO][6195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:45.023417 containerd[1471]: 2025-07-14 22:39:45.020 [INFO][6186] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc" Jul 14 22:39:45.024014 containerd[1471]: time="2025-07-14T22:39:45.023481798Z" level=info msg="TearDown network for sandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" successfully" Jul 14 22:39:45.028114 containerd[1471]: time="2025-07-14T22:39:45.028085349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:45.028165 containerd[1471]: time="2025-07-14T22:39:45.028147716Z" level=info msg="RemovePodSandbox \"043468a21b18fa870e0dc2d3aec50143767465876525898c7009b09f14ac05fc\" returns successfully" Jul 14 22:39:45.028898 containerd[1471]: time="2025-07-14T22:39:45.028589058Z" level=info msg="StopPodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\"" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.064 [WARNING][6214] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0", GenerateName:"calico-kube-controllers-798dc4cdb-", Namespace:"calico-system", SelfLink:"", UID:"121a6ca1-b03e-4bca-84d8-4cf70c6b267d", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798dc4cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303", Pod:"calico-kube-controllers-798dc4cdb-fh8g7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b132486e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.065 [INFO][6214] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.065 [INFO][6214] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" iface="eth0" netns="" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.065 [INFO][6214] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.065 [INFO][6214] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.090 [INFO][6223] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.090 [INFO][6223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.090 [INFO][6223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.097 [WARNING][6223] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.097 [INFO][6223] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.098 [INFO][6223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:45.104930 containerd[1471]: 2025-07-14 22:39:45.102 [INFO][6214] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.105410 containerd[1471]: time="2025-07-14T22:39:45.104978963Z" level=info msg="TearDown network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" successfully" Jul 14 22:39:45.105410 containerd[1471]: time="2025-07-14T22:39:45.105014911Z" level=info msg="StopPodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" returns successfully" Jul 14 22:39:45.105528 containerd[1471]: time="2025-07-14T22:39:45.105494254Z" level=info msg="RemovePodSandbox for \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\"" Jul 14 22:39:45.105528 containerd[1471]: time="2025-07-14T22:39:45.105516125Z" level=info msg="Forcibly stopping sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\"" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.140 [WARNING][6240] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0", GenerateName:"calico-kube-controllers-798dc4cdb-", Namespace:"calico-system", SelfLink:"", UID:"121a6ca1-b03e-4bca-84d8-4cf70c6b267d", ResourceVersion:"1292", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"798dc4cdb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f32b5717645fc30e99d9dcdb91476249ec584d0ecfb4545442e083357d86303", Pod:"calico-kube-controllers-798dc4cdb-fh8g7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6b132486e0a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.141 [INFO][6240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.141 [INFO][6240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" iface="eth0" netns="" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.141 [INFO][6240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.141 [INFO][6240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.160 [INFO][6249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.161 [INFO][6249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.161 [INFO][6249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.166 [WARNING][6249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.167 [INFO][6249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" HandleID="k8s-pod-network.abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Workload="localhost-k8s-calico--kube--controllers--798dc4cdb--fh8g7-eth0" Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.168 [INFO][6249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:45.175488 containerd[1471]: 2025-07-14 22:39:45.172 [INFO][6240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59" Jul 14 22:39:45.176053 containerd[1471]: time="2025-07-14T22:39:45.175524521Z" level=info msg="TearDown network for sandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" successfully" Jul 14 22:39:45.179756 containerd[1471]: time="2025-07-14T22:39:45.179723920Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:45.179831 containerd[1471]: time="2025-07-14T22:39:45.179785006Z" level=info msg="RemovePodSandbox \"abeb19c81820dd5c482ff4fba192eeccd3390386be02254c4a4c6a4df6b90a59\" returns successfully" Jul 14 22:39:45.180264 containerd[1471]: time="2025-07-14T22:39:45.180238399Z" level=info msg="StopPodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\"" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.215 [WARNING][6266] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--4z895-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b8c6f0a5-a93f-42f8-a410-795cf33b659f", ResourceVersion:"1417", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26", Pod:"goldmane-768f4c5c69-4z895", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali10c398e07ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.215 [INFO][6266] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.216 [INFO][6266] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" iface="eth0" netns="" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.216 [INFO][6266] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.216 [INFO][6266] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.237 [INFO][6275] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.237 [INFO][6275] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.237 [INFO][6275] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.245 [WARNING][6275] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.245 [INFO][6275] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.246 [INFO][6275] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:45.252581 containerd[1471]: 2025-07-14 22:39:45.249 [INFO][6266] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.252581 containerd[1471]: time="2025-07-14T22:39:45.252513915Z" level=info msg="TearDown network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" successfully" Jul 14 22:39:45.252581 containerd[1471]: time="2025-07-14T22:39:45.252542228Z" level=info msg="StopPodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" returns successfully" Jul 14 22:39:45.253111 containerd[1471]: time="2025-07-14T22:39:45.253075142Z" level=info msg="RemovePodSandbox for \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\"" Jul 14 22:39:45.253111 containerd[1471]: time="2025-07-14T22:39:45.253103866Z" level=info msg="Forcibly stopping sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\"" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.289 [WARNING][6292] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--4z895-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"b8c6f0a5-a93f-42f8-a410-795cf33b659f", ResourceVersion:"1417", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 38, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"13ceecce3c1972155875ad9e3fe97cc62cff7f0f93f2834ae095489986678e26", Pod:"goldmane-768f4c5c69-4z895", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali10c398e07ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.289 [INFO][6292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.289 [INFO][6292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" iface="eth0" netns="" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.289 [INFO][6292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.289 [INFO][6292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.313 [INFO][6301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.313 [INFO][6301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.313 [INFO][6301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.320 [WARNING][6301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.321 [INFO][6301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" HandleID="k8s-pod-network.53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Workload="localhost-k8s-goldmane--768f4c5c69--4z895-eth0" Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.322 [INFO][6301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:39:45.329553 containerd[1471]: 2025-07-14 22:39:45.326 [INFO][6292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9" Jul 14 22:39:45.330120 containerd[1471]: time="2025-07-14T22:39:45.329599881Z" level=info msg="TearDown network for sandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" successfully" Jul 14 22:39:45.334396 containerd[1471]: time="2025-07-14T22:39:45.334361560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:39:45.334509 containerd[1471]: time="2025-07-14T22:39:45.334441811Z" level=info msg="RemovePodSandbox \"53bbfd6e2ef4edc87597d458aa41598478151350c8dec99b64ad7477e8fe7bc9\" returns successfully" Jul 14 22:39:47.473371 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:47034.service - OpenSSH per-connection server daemon (10.0.0.1:47034). Jul 14 22:39:47.532025 sshd[6310]: Accepted publickey for core from 10.0.0.1 port 47034 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:47.533987 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:47.539236 systemd-logind[1452]: New session 20 of user core. Jul 14 22:39:47.548718 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 14 22:39:47.990576 sshd[6310]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:47.995143 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:47034.service: Deactivated successfully. Jul 14 22:39:47.997578 systemd[1]: session-20.scope: Deactivated successfully. Jul 14 22:39:47.998210 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit. Jul 14 22:39:47.999205 systemd-logind[1452]: Removed session 20. Jul 14 22:39:53.012227 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:36680.service - OpenSSH per-connection server daemon (10.0.0.1:36680). Jul 14 22:39:53.051611 sshd[6371]: Accepted publickey for core from 10.0.0.1 port 36680 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk Jul 14 22:39:53.053629 sshd[6371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:39:53.057801 systemd-logind[1452]: New session 21 of user core. Jul 14 22:39:53.068618 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 14 22:39:53.585028 sshd[6371]: pam_unix(sshd:session): session closed for user core Jul 14 22:39:53.589849 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:36680.service: Deactivated successfully. Jul 14 22:39:53.591798 systemd[1]: session-21.scope: Deactivated successfully. 
Jul 14 22:39:53.592395 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Jul 14 22:39:53.593315 systemd-logind[1452]: Removed session 21.
Jul 14 22:39:58.597909 systemd[1]: Started sshd@21-10.0.0.12:22-10.0.0.1:52098.service - OpenSSH per-connection server daemon (10.0.0.1:52098).
Jul 14 22:39:58.642155 sshd[6386]: Accepted publickey for core from 10.0.0.1 port 52098 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:39:58.644139 sshd[6386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:39:58.649025 systemd-logind[1452]: New session 22 of user core.
Jul 14 22:39:58.662778 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 22:39:58.777595 sshd[6386]: pam_unix(sshd:session): session closed for user core
Jul 14 22:39:58.782373 systemd[1]: sshd@21-10.0.0.12:22-10.0.0.1:52098.service: Deactivated successfully.
Jul 14 22:39:58.784891 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 22:39:58.785661 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Jul 14 22:39:58.786870 systemd-logind[1452]: Removed session 22.
Jul 14 22:40:03.521508 kubelet[2604]: E0714 22:40:03.521437 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:40:03.522233 kubelet[2604]: E0714 22:40:03.521681 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:40:03.792207 systemd[1]: Started sshd@22-10.0.0.12:22-10.0.0.1:52102.service - OpenSSH per-connection server daemon (10.0.0.1:52102).
Jul 14 22:40:03.841817 sshd[6400]: Accepted publickey for core from 10.0.0.1 port 52102 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:03.843411 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:03.847559 systemd-logind[1452]: New session 23 of user core.
Jul 14 22:40:03.858590 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 22:40:04.085537 sshd[6400]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:04.100812 systemd[1]: sshd@22-10.0.0.12:22-10.0.0.1:52102.service: Deactivated successfully.
Jul 14 22:40:04.104247 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 22:40:04.106432 systemd-logind[1452]: Session 23 logged out. Waiting for processes to exit.
Jul 14 22:40:04.120795 systemd[1]: Started sshd@23-10.0.0.12:22-10.0.0.1:52118.service - OpenSSH per-connection server daemon (10.0.0.1:52118).
Jul 14 22:40:04.122162 systemd-logind[1452]: Removed session 23.
Jul 14 22:40:04.157142 sshd[6414]: Accepted publickey for core from 10.0.0.1 port 52118 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:04.163326 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:04.172586 systemd-logind[1452]: New session 24 of user core.
Jul 14 22:40:04.179663 systemd[1]: Started session-24.scope - Session 24 of User core.
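Annotation: in each "Accepted publickey" record, the SHA256:gWwLOAa+... token is OpenSSH's unpadded base64 SHA-256 fingerprint of the client's public key, so every session in this stretch comes from the same key. A small runnable sketch of producing a fingerprint in that format; it generates a throwaway ed25519 key for demonstration, but parsing the node's real authorized_keys entry with ssh.ParseAuthorizedKey would reproduce the exact string logged above:

```go
// Compute an OpenSSH-style SHA-256 key fingerprint like the ones sshd logs.
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key, just to show the output format; feed a real public key
	// to match a specific "Accepted publickey" log entry.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ssh.FingerprintSHA256(sshPub)) // e.g. "SHA256:..."
}
```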
Jul 14 22:40:08.523405 kubelet[2604]: E0714 22:40:08.523357 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:40:09.975748 systemd[1]: run-containerd-runc-k8s.io-783084b34eba557c6e94e38779db394848b1edc7aa0b5e11d81dfa2239bfc9d7-runc.KUQgyl.mount: Deactivated successfully.
Jul 14 22:40:11.066948 sshd[6414]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:11.076870 systemd[1]: sshd@23-10.0.0.12:22-10.0.0.1:52118.service: Deactivated successfully.
Jul 14 22:40:11.078986 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 22:40:11.080589 systemd-logind[1452]: Session 24 logged out. Waiting for processes to exit.
Jul 14 22:40:11.086718 systemd[1]: Started sshd@24-10.0.0.12:22-10.0.0.1:47564.service - OpenSSH per-connection server daemon (10.0.0.1:47564).
Jul 14 22:40:11.087773 systemd-logind[1452]: Removed session 24.
Jul 14 22:40:11.141355 sshd[6492]: Accepted publickey for core from 10.0.0.1 port 47564 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:11.143125 sshd[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:11.148026 systemd-logind[1452]: New session 25 of user core.
Jul 14 22:40:11.152645 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 22:40:13.262950 sshd[6492]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:13.273673 systemd[1]: sshd@24-10.0.0.12:22-10.0.0.1:47564.service: Deactivated successfully.
Jul 14 22:40:13.275510 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 22:40:13.277047 systemd-logind[1452]: Session 25 logged out. Waiting for processes to exit.
Jul 14 22:40:13.278384 systemd[1]: Started sshd@25-10.0.0.12:22-10.0.0.1:47574.service - OpenSSH per-connection server daemon (10.0.0.1:47574).
Jul 14 22:40:13.279408 systemd-logind[1452]: Removed session 25.
Jul 14 22:40:13.339177 sshd[6523]: Accepted publickey for core from 10.0.0.1 port 47574 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:13.341559 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:13.346486 systemd-logind[1452]: New session 26 of user core.
Jul 14 22:40:13.353662 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 22:40:13.522122 kubelet[2604]: E0714 22:40:13.521982 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:40:14.068940 sshd[6523]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:14.081525 systemd[1]: sshd@25-10.0.0.12:22-10.0.0.1:47574.service: Deactivated successfully.
Jul 14 22:40:14.084821 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 22:40:14.086376 systemd-logind[1452]: Session 26 logged out. Waiting for processes to exit.
Jul 14 22:40:14.102838 systemd[1]: Started sshd@26-10.0.0.12:22-10.0.0.1:47580.service - OpenSSH per-connection server daemon (10.0.0.1:47580).
Jul 14 22:40:14.103754 systemd-logind[1452]: Removed session 26.
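Annotation: the recurring kubelet dns.go:153 error means the node's resolv.conf lists more nameservers than the limit of three that kubelet applies; it keeps the first three (here 1.1.1.1, 1.0.0.1, 8.8.8.8) and omits the rest, so pod DNS keeps working with a truncated server list. A sketch of that truncation; the resolv.conf path and the limit of 3 are assumptions inferred from the log:

```go
// Sketch of the truncation behind kubelet's "Nameserver limits exceeded".
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNameservers = 3 // resolver limit kubelet applies (assumed here)

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	applied := servers
	if len(applied) > maxNameservers {
		// This is the condition under which kubelet logs the
		// "Nameserver limits exceeded" error seen above.
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}
```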
Jul 14 22:40:14.147692 sshd[6536]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:14.149728 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:14.154123 systemd-logind[1452]: New session 27 of user core.
Jul 14 22:40:14.159592 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 14 22:40:14.282778 sshd[6536]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:14.290675 systemd-logind[1452]: Session 27 logged out. Waiting for processes to exit.
Jul 14 22:40:14.291101 systemd[1]: sshd@26-10.0.0.12:22-10.0.0.1:47580.service: Deactivated successfully.
Jul 14 22:40:14.296741 systemd[1]: session-27.scope: Deactivated successfully.
Jul 14 22:40:14.301253 systemd-logind[1452]: Removed session 27.
Jul 14 22:40:14.523302 kubelet[2604]: E0714 22:40:14.523250 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:40:19.297653 systemd[1]: Started sshd@27-10.0.0.12:22-10.0.0.1:53874.service - OpenSSH per-connection server daemon (10.0.0.1:53874).
Jul 14 22:40:19.345624 sshd[6573]: Accepted publickey for core from 10.0.0.1 port 53874 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:19.347487 sshd[6573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:19.352243 systemd-logind[1452]: New session 28 of user core.
Jul 14 22:40:19.361588 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 14 22:40:19.563131 sshd[6573]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:19.567777 systemd[1]: sshd@27-10.0.0.12:22-10.0.0.1:53874.service: Deactivated successfully.
Jul 14 22:40:19.570040 systemd[1]: session-28.scope: Deactivated successfully.
Jul 14 22:40:19.570754 systemd-logind[1452]: Session 28 logged out. Waiting for processes to exit.
Jul 14 22:40:19.571652 systemd-logind[1452]: Removed session 28.
Jul 14 22:40:24.579357 systemd[1]: Started sshd@28-10.0.0.12:22-10.0.0.1:53890.service - OpenSSH per-connection server daemon (10.0.0.1:53890).
Jul 14 22:40:24.623819 sshd[6612]: Accepted publickey for core from 10.0.0.1 port 53890 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:24.626308 sshd[6612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:24.632260 systemd-logind[1452]: New session 29 of user core.
Jul 14 22:40:24.642758 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 14 22:40:24.776782 sshd[6612]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:24.781399 systemd[1]: sshd@28-10.0.0.12:22-10.0.0.1:53890.service: Deactivated successfully.
Jul 14 22:40:24.784403 systemd[1]: session-29.scope: Deactivated successfully.
Jul 14 22:40:24.785682 systemd-logind[1452]: Session 29 logged out. Waiting for processes to exit.
Jul 14 22:40:24.787314 systemd-logind[1452]: Removed session 29.
Jul 14 22:40:27.522081 kubelet[2604]: E0714 22:40:27.522012 2604 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:40:29.789881 systemd[1]: Started sshd@29-10.0.0.12:22-10.0.0.1:49920.service - OpenSSH per-connection server daemon (10.0.0.1:49920).
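Annotation: the sshd@N-LOCAL-REMOTE.service names above come from per-connection socket activation: each accepted connection is wrapped in its own service instance whose name encodes a connection counter plus the local and remote TCP endpoints. A tiny illustration of composing that instance name; the format string is inferred from these log entries, not taken from systemd documentation:

```go
// Reproduce the per-connection unit naming visible in this log,
// e.g. "sshd@29-10.0.0.12:22-10.0.0.1:49920.service".
package main

import "fmt"

func unitName(n int, local, remote string) string {
	return fmt.Sprintf("sshd@%d-%s-%s.service", n, local, remote)
}

func main() {
	fmt.Println(unitName(29, "10.0.0.12:22", "10.0.0.1:49920"))
}
```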
Jul 14 22:40:29.843073 sshd[6646]: Accepted publickey for core from 10.0.0.1 port 49920 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:29.845317 sshd[6646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:29.850803 systemd-logind[1452]: New session 30 of user core.
Jul 14 22:40:29.855718 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 14 22:40:30.145072 sshd[6646]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:30.149822 systemd[1]: sshd@29-10.0.0.12:22-10.0.0.1:49920.service: Deactivated successfully.
Jul 14 22:40:30.153047 systemd[1]: session-30.scope: Deactivated successfully.
Jul 14 22:40:30.154484 systemd-logind[1452]: Session 30 logged out. Waiting for processes to exit.
Jul 14 22:40:30.155604 systemd-logind[1452]: Removed session 30.
Jul 14 22:40:35.169791 systemd[1]: Started sshd@30-10.0.0.12:22-10.0.0.1:49936.service - OpenSSH per-connection server daemon (10.0.0.1:49936).
Jul 14 22:40:35.205888 sshd[6661]: Accepted publickey for core from 10.0.0.1 port 49936 ssh2: RSA SHA256:gWwLOAa+n9/kcHCVn0L6qh8UUZHknWXoV+nLrUVphTk
Jul 14 22:40:35.207762 sshd[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:40:35.212005 systemd-logind[1452]: New session 31 of user core.
Jul 14 22:40:35.219749 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 14 22:40:35.353655 sshd[6661]: pam_unix(sshd:session): session closed for user core
Jul 14 22:40:35.359258 systemd[1]: sshd@30-10.0.0.12:22-10.0.0.1:49936.service: Deactivated successfully.
Jul 14 22:40:35.361768 systemd[1]: session-31.scope: Deactivated successfully.
Jul 14 22:40:35.362360 systemd-logind[1452]: Session 31 logged out. Waiting for processes to exit.
Jul 14 22:40:35.363311 systemd-logind[1452]: Removed session 31.