Jul 14 22:26:21.140260 kernel: Linux version 6.6.97-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 14 20:23:49 -00 2025
Jul 14 22:26:21.140285 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:26:21.140296 kernel: BIOS-provided physical RAM map:
Jul 14 22:26:21.140303 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 14 22:26:21.140309 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 14 22:26:21.140315 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 14 22:26:21.140322 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 14 22:26:21.140329 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 14 22:26:21.140335 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 14 22:26:21.140341 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 14 22:26:21.140350 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 14 22:26:21.140357 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 14 22:26:21.140366 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 14 22:26:21.140373 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 14 22:26:21.140383 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 14 22:26:21.140390 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 14 22:26:21.140400 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 14 22:26:21.140407 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 14 22:26:21.140413 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 14 22:26:21.140420 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 14 22:26:21.140427 kernel: NX (Execute Disable) protection: active
Jul 14 22:26:21.140433 kernel: APIC: Static calls initialized
Jul 14 22:26:21.140440 kernel: efi: EFI v2.7 by EDK II
Jul 14 22:26:21.140447 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Jul 14 22:26:21.140454 kernel: SMBIOS 2.8 present.
Jul 14 22:26:21.140460 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 14 22:26:21.140467 kernel: Hypervisor detected: KVM
Jul 14 22:26:21.140477 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 14 22:26:21.140483 kernel: kvm-clock: using sched offset of 6261578262 cycles
Jul 14 22:26:21.140490 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 14 22:26:21.140498 kernel: tsc: Detected 2794.750 MHz processor
Jul 14 22:26:21.140505 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 14 22:26:21.140512 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 14 22:26:21.140519 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 14 22:26:21.140526 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 14 22:26:21.140533 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 14 22:26:21.140542 kernel: Using GB pages for direct mapping
Jul 14 22:26:21.140559 kernel: Secure boot disabled
Jul 14 22:26:21.140566 kernel: ACPI: Early table checksum verification disabled
Jul 14 22:26:21.140574 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 14 22:26:21.140586 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 14 22:26:21.140593 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:26:21.140600 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:26:21.140611 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 14 22:26:21.140618 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:26:21.140628 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:26:21.140635 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:26:21.140643 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 14 22:26:21.140650 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 14 22:26:21.140657 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 14 22:26:21.140668 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 14 22:26:21.140675 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 14 22:26:21.140682 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 14 22:26:21.140689 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 14 22:26:21.140697 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 14 22:26:21.140704 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 14 22:26:21.140711 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 14 22:26:21.140718 kernel: No NUMA configuration found
Jul 14 22:26:21.140729 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 14 22:26:21.140739 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 14 22:26:21.140746 kernel: Zone ranges:
Jul 14 22:26:21.140754 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 14 22:26:21.140761 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 14 22:26:21.140768 kernel: Normal empty
Jul 14 22:26:21.140776 kernel: Movable zone start for each node
Jul 14 22:26:21.140783 kernel: Early memory node ranges
Jul 14 22:26:21.140790 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 14 22:26:21.140797 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 14 22:26:21.140804 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 14 22:26:21.140814 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 14 22:26:21.140822 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 14 22:26:21.140829 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 14 22:26:21.140838 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 14 22:26:21.140845 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:26:21.140853 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 14 22:26:21.140860 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 14 22:26:21.140867 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 14 22:26:21.140874 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 14 22:26:21.140884 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 14 22:26:21.140891 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 14 22:26:21.140912 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 14 22:26:21.140920 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 14 22:26:21.140927 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 14 22:26:21.140934 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 14 22:26:21.140941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 14 22:26:21.140949 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 14 22:26:21.140956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 14 22:26:21.140966 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 14 22:26:21.140973 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 14 22:26:21.140980 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 14 22:26:21.140987 kernel: TSC deadline timer available
Jul 14 22:26:21.140995 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 14 22:26:21.141002 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 14 22:26:21.141009 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 14 22:26:21.141016 kernel: kvm-guest: setup PV sched yield
Jul 14 22:26:21.141024 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 14 22:26:21.141033 kernel: Booting paravirtualized kernel on KVM
Jul 14 22:26:21.141041 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 14 22:26:21.141048 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 14 22:26:21.141055 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 14 22:26:21.141063 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 14 22:26:21.141070 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 14 22:26:21.141077 kernel: kvm-guest: PV spinlocks enabled
Jul 14 22:26:21.141084 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 14 22:26:21.141093 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:26:21.141107 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 14 22:26:21.141114 kernel: random: crng init done
Jul 14 22:26:21.141121 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 14 22:26:21.141129 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 14 22:26:21.141136 kernel: Fallback order for Node 0: 0
Jul 14 22:26:21.141143 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 14 22:26:21.141151 kernel: Policy zone: DMA32
Jul 14 22:26:21.141158 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 14 22:26:21.141168 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 166140K reserved, 0K cma-reserved)
Jul 14 22:26:21.141176 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 14 22:26:21.141183 kernel: ftrace: allocating 37970 entries in 149 pages
Jul 14 22:26:21.141190 kernel: ftrace: allocated 149 pages with 4 groups
Jul 14 22:26:21.141197 kernel: Dynamic Preempt: voluntary
Jul 14 22:26:21.141213 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 14 22:26:21.141225 kernel: rcu: RCU event tracing is enabled.
Jul 14 22:26:21.141235 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 14 22:26:21.141243 kernel: Trampoline variant of Tasks RCU enabled.
Jul 14 22:26:21.141251 kernel: Rude variant of Tasks RCU enabled.
Jul 14 22:26:21.141261 kernel: Tracing variant of Tasks RCU enabled.
Jul 14 22:26:21.141269 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 14 22:26:21.141281 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 14 22:26:21.141289 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 14 22:26:21.141299 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 14 22:26:21.141309 kernel: Console: colour dummy device 80x25
Jul 14 22:26:21.141320 kernel: printk: console [ttyS0] enabled
Jul 14 22:26:21.141340 kernel: ACPI: Core revision 20230628
Jul 14 22:26:21.141350 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 14 22:26:21.141361 kernel: APIC: Switch to symmetric I/O mode setup
Jul 14 22:26:21.141371 kernel: x2apic enabled
Jul 14 22:26:21.141381 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 14 22:26:21.141391 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 14 22:26:21.141401 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 14 22:26:21.141411 kernel: kvm-guest: setup PV IPIs
Jul 14 22:26:21.141421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 14 22:26:21.141433 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 14 22:26:21.141441 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 14 22:26:21.141449 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 14 22:26:21.141457 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 14 22:26:21.141464 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 14 22:26:21.141472 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 14 22:26:21.141480 kernel: Spectre V2 : Mitigation: Retpolines
Jul 14 22:26:21.141488 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 14 22:26:21.141495 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 14 22:26:21.141506 kernel: RETBleed: Mitigation: untrained return thunk
Jul 14 22:26:21.141514 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 14 22:26:21.141521 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 14 22:26:21.141529 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 14 22:26:21.141542 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 14 22:26:21.141559 kernel: x86/bugs: return thunk changed
Jul 14 22:26:21.141567 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 14 22:26:21.141576 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 14 22:26:21.141587 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 14 22:26:21.141594 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 14 22:26:21.141602 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 14 22:26:21.141610 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 14 22:26:21.141617 kernel: Freeing SMP alternatives memory: 32K
Jul 14 22:26:21.141625 kernel: pid_max: default: 32768 minimum: 301
Jul 14 22:26:21.141632 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 14 22:26:21.141640 kernel: landlock: Up and running.
Jul 14 22:26:21.141647 kernel: SELinux: Initializing.
Jul 14 22:26:21.141658 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:26:21.141665 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 14 22:26:21.141673 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 14 22:26:21.141681 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:26:21.141688 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:26:21.141697 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 14 22:26:21.141707 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 14 22:26:21.141724 kernel: ... version: 0
Jul 14 22:26:21.141734 kernel: ... bit width: 48
Jul 14 22:26:21.141749 kernel: ... generic registers: 6
Jul 14 22:26:21.141759 kernel: ... value mask: 0000ffffffffffff
Jul 14 22:26:21.141769 kernel: ... max period: 00007fffffffffff
Jul 14 22:26:21.141778 kernel: ... fixed-purpose events: 0
Jul 14 22:26:21.141788 kernel: ... event mask: 000000000000003f
Jul 14 22:26:21.141798 kernel: signal: max sigframe size: 1776
Jul 14 22:26:21.141808 kernel: rcu: Hierarchical SRCU implementation.
Jul 14 22:26:21.141818 kernel: rcu: Max phase no-delay instances is 400.
Jul 14 22:26:21.141828 kernel: smp: Bringing up secondary CPUs ...
Jul 14 22:26:21.141839 kernel: smpboot: x86: Booting SMP configuration:
Jul 14 22:26:21.141846 kernel: .... node #0, CPUs: #1 #2 #3
Jul 14 22:26:21.141854 kernel: smp: Brought up 1 node, 4 CPUs
Jul 14 22:26:21.141861 kernel: smpboot: Max logical packages: 1
Jul 14 22:26:21.141869 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 14 22:26:21.141877 kernel: devtmpfs: initialized
Jul 14 22:26:21.141884 kernel: x86/mm: Memory block size: 128MB
Jul 14 22:26:21.141892 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 14 22:26:21.141929 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 14 22:26:21.141936 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 14 22:26:21.141948 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 14 22:26:21.141956 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 14 22:26:21.141965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 14 22:26:21.141973 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 14 22:26:21.141981 kernel: pinctrl core: initialized pinctrl subsystem
Jul 14 22:26:21.141989 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 14 22:26:21.141997 kernel: audit: initializing netlink subsys (disabled)
Jul 14 22:26:21.142005 kernel: audit: type=2000 audit(1752531980.278:1): state=initialized audit_enabled=0 res=1
Jul 14 22:26:21.142015 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 14 22:26:21.142023 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 14 22:26:21.142031 kernel: cpuidle: using governor menu
Jul 14 22:26:21.142038 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 14 22:26:21.142046 kernel: dca service started, version 1.12.1
Jul 14 22:26:21.142054 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 14 22:26:21.142062 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 14 22:26:21.142069 kernel: PCI: Using configuration type 1 for base access
Jul 14 22:26:21.142077 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 14 22:26:21.142087 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 14 22:26:21.142095 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 14 22:26:21.142103 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 14 22:26:21.142110 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 14 22:26:21.142118 kernel: ACPI: Added _OSI(Module Device)
Jul 14 22:26:21.142125 kernel: ACPI: Added _OSI(Processor Device)
Jul 14 22:26:21.142133 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 14 22:26:21.142141 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 14 22:26:21.142148 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 14 22:26:21.142159 kernel: ACPI: Interpreter enabled
Jul 14 22:26:21.142167 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 14 22:26:21.142175 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 14 22:26:21.142183 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 14 22:26:21.142190 kernel: PCI: Using E820 reservations for host bridge windows
Jul 14 22:26:21.142198 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 14 22:26:21.142207 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 14 22:26:21.142442 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 14 22:26:21.142598 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 14 22:26:21.142745 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 14 22:26:21.142759 kernel: PCI host bridge to bus 0000:00
Jul 14 22:26:21.142953 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 14 22:26:21.143079 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 14 22:26:21.143198 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 14 22:26:21.143314 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 14 22:26:21.143436 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 14 22:26:21.143562 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 14 22:26:21.143681 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 14 22:26:21.143844 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 14 22:26:21.144009 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 14 22:26:21.144141 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 14 22:26:21.144276 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 14 22:26:21.144403 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 14 22:26:21.144531 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 14 22:26:21.144671 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 14 22:26:21.144820 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 14 22:26:21.144967 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 14 22:26:21.145099 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 14 22:26:21.145234 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 14 22:26:21.145495 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 14 22:26:21.145640 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 14 22:26:21.145770 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 14 22:26:21.145920 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 14 22:26:21.146080 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 14 22:26:21.146232 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 14 22:26:21.146395 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 14 22:26:21.146526 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 14 22:26:21.146668 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 14 22:26:21.146818 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 14 22:26:21.146967 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 14 22:26:21.147117 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 14 22:26:21.147247 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 14 22:26:21.147382 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 14 22:26:21.147539 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 14 22:26:21.147679 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 14 22:26:21.147690 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 14 22:26:21.147698 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 14 22:26:21.147707 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 14 22:26:21.147714 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 14 22:26:21.147722 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 14 22:26:21.147735 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 14 22:26:21.147742 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 14 22:26:21.147750 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 14 22:26:21.147758 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 14 22:26:21.147766 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 14 22:26:21.147773 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 14 22:26:21.147781 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 14 22:26:21.147789 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 14 22:26:21.147797 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 14 22:26:21.147808 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 14 22:26:21.147816 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 14 22:26:21.147823 kernel: iommu: Default domain type: Translated
Jul 14 22:26:21.147831 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 14 22:26:21.147839 kernel: efivars: Registered efivars operations
Jul 14 22:26:21.147847 kernel: PCI: Using ACPI for IRQ routing
Jul 14 22:26:21.147855 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 14 22:26:21.147863 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 14 22:26:21.147871 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 14 22:26:21.147882 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 14 22:26:21.147889 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 14 22:26:21.148052 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 14 22:26:21.148180 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 14 22:26:21.148307 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 14 22:26:21.148317 kernel: vgaarb: loaded
Jul 14 22:26:21.148325 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 14 22:26:21.148333 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 14 22:26:21.148345 kernel: clocksource: Switched to clocksource kvm-clock
Jul 14 22:26:21.148353 kernel: VFS: Disk quotas dquot_6.6.0
Jul 14 22:26:21.148361 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 14 22:26:21.148369 kernel: pnp: PnP ACPI init
Jul 14 22:26:21.148525 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 14 22:26:21.148538 kernel: pnp: PnP ACPI: found 6 devices
Jul 14 22:26:21.148546 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 14 22:26:21.148566 kernel: NET: Registered PF_INET protocol family
Jul 14 22:26:21.148575 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 14 22:26:21.148587 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 14 22:26:21.148595 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 14 22:26:21.148603 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 14 22:26:21.148611 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 14 22:26:21.148619 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 14 22:26:21.148626 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:26:21.148634 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 14 22:26:21.148642 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 14 22:26:21.148653 kernel: NET: Registered PF_XDP protocol family
Jul 14 22:26:21.148785 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 14 22:26:21.148928 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 14 22:26:21.149052 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 14 22:26:21.149170 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 14 22:26:21.149288 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 14 22:26:21.149405 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 14 22:26:21.149522 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 14 22:26:21.149678 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 14 22:26:21.149691 kernel: PCI: CLS 0 bytes, default 64
Jul 14 22:26:21.149701 kernel: Initialise system trusted keyrings
Jul 14 22:26:21.149709 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 14 22:26:21.149717 kernel: Key type asymmetric registered
Jul 14 22:26:21.149725 kernel: Asymmetric key parser 'x509' registered
Jul 14 22:26:21.149733 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 14 22:26:21.149741 kernel: io scheduler mq-deadline registered
Jul 14 22:26:21.149755 kernel: io scheduler kyber registered
Jul 14 22:26:21.149770 kernel: io scheduler bfq registered
Jul 14 22:26:21.149778 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 14 22:26:21.149787 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 14 22:26:21.149795 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 14 22:26:21.149803 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 14 22:26:21.149811 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 14 22:26:21.149819 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 14 22:26:21.149827 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 14 22:26:21.149835 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 14 22:26:21.149845 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 14 22:26:21.149853 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 14 22:26:21.150197 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 14 22:26:21.150324 kernel: rtc_cmos 00:04: registered as rtc0
Jul 14 22:26:21.150446 kernel: rtc_cmos 00:04: setting system clock to 2025-07-14T22:26:20 UTC (1752531980)
Jul 14 22:26:21.150577 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 14 22:26:21.150588 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 14 22:26:21.150596 kernel: efifb: probing for efifb
Jul 14 22:26:21.150609 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 14 22:26:21.150617 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 14 22:26:21.150626 kernel: efifb: scrolling: redraw
Jul 14 22:26:21.150633 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 14 22:26:21.150641 kernel: Console: switching to colour frame buffer device 100x37
Jul 14 22:26:21.150650 kernel: fb0: EFI VGA frame buffer device
Jul 14 22:26:21.150679 kernel: pstore: Using crash dump compression: deflate
Jul 14 22:26:21.150689 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 14 22:26:21.150698 kernel: NET: Registered PF_INET6 protocol family
Jul 14 22:26:21.150711 kernel: Segment Routing with IPv6
Jul 14 22:26:21.150719 kernel: In-situ OAM (IOAM) with IPv6
Jul 14 22:26:21.150727 kernel: NET: Registered PF_PACKET protocol family
Jul 14 22:26:21.150735 kernel: Key type dns_resolver registered
Jul 14 22:26:21.150743 kernel: IPI shorthand broadcast: enabled
Jul 14 22:26:21.150752 kernel: sched_clock: Marking stable (1335004976, 155543497)->(1571970423, -81421950)
Jul 14 22:26:21.150760 kernel: registered taskstats version 1
Jul 14 22:26:21.150768 kernel: Loading compiled-in X.509 certificates
Jul 14 22:26:21.150776 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.97-flatcar: ff10e110ca3923b510cf0133f4e9f48dd636b870'
Jul 14 22:26:21.150787 kernel: Key type .fscrypt registered
Jul 14 22:26:21.150795 kernel: Key type fscrypt-provisioning registered
Jul 14 22:26:21.150803 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 14 22:26:21.150811 kernel: ima: Allocated hash algorithm: sha1
Jul 14 22:26:21.150819 kernel: ima: No architecture policies found
Jul 14 22:26:21.150827 kernel: clk: Disabling unused clocks
Jul 14 22:26:21.150836 kernel: Freeing unused kernel image (initmem) memory: 42876K
Jul 14 22:26:21.150844 kernel: Write protecting the kernel read-only data: 36864k
Jul 14 22:26:21.150852 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 14 22:26:21.150862 kernel: Run /init as init process
Jul 14 22:26:21.150871 kernel: with arguments:
Jul 14 22:26:21.150879 kernel: /init
Jul 14 22:26:21.150887 kernel: with environment:
Jul 14 22:26:21.150908 kernel: HOME=/
Jul 14 22:26:21.150916 kernel: TERM=linux
Jul 14 22:26:21.150924 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 14 22:26:21.150935 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:26:21.150949 systemd[1]: Detected virtualization kvm.
Jul 14 22:26:21.150958 systemd[1]: Detected architecture x86-64.
Jul 14 22:26:21.150966 systemd[1]: Running in initrd.
Jul 14 22:26:21.150974 systemd[1]: No hostname configured, using default hostname.
Jul 14 22:26:21.150983 systemd[1]: Hostname set to <localhost>.
Jul 14 22:26:21.150996 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:26:21.151005 systemd[1]: Queued start job for default target initrd.target.
Jul 14 22:26:21.151014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:26:21.151022 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:26:21.151031 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 14 22:26:21.151040 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:26:21.151049 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 14 22:26:21.151060 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 14 22:26:21.151071 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 14 22:26:21.151079 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 14 22:26:21.151088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:26:21.151097 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:26:21.151105 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:26:21.151114 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:26:21.151125 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:26:21.151134 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:26:21.151142 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:26:21.151151 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:26:21.151160 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:26:21.151168 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:26:21.151177 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:26:21.151186 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:26:21.151194 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:26:21.151206 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:26:21.151214 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 14 22:26:21.151227 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:26:21.151236 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 14 22:26:21.151244 systemd[1]: Starting systemd-fsck-usr.service...
Jul 14 22:26:21.151253 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:26:21.151261 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:26:21.151270 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:26:21.151281 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 14 22:26:21.151289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:26:21.151298 systemd[1]: Finished systemd-fsck-usr.service.
Jul 14 22:26:21.151332 systemd-journald[194]: Collecting audit messages is disabled.
Jul 14 22:26:21.151356 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:26:21.151365 systemd-journald[194]: Journal started
Jul 14 22:26:21.151383 systemd-journald[194]: Runtime Journal (/run/log/journal/d1880ddd92f84ddc91f129cd0f5e2675) is 6.0M, max 48.3M, 42.2M free.
Jul 14 22:26:21.244270 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:26:21.246132 systemd-modules-load[195]: Inserted module 'overlay'
Jul 14 22:26:21.246483 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:26:21.249223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:26:21.274245 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:26:21.333722 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:26:21.337235 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 14 22:26:21.335349 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:26:21.344237 kernel: Bridge firewalling registered
Jul 14 22:26:21.343122 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jul 14 22:26:21.344937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:26:21.349281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:26:21.353376 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:26:21.429834 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:26:21.432821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:26:21.458166 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 14 22:26:21.458842 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:26:21.463389 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:26:21.477400 dracut-cmdline[227]: dracut-dracut-053
Jul 14 22:26:21.491680 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bfa97d577a2baa7448b0ab2cae71f1606bd0084ffae5b72cc7eef5122a2ca497
Jul 14 22:26:21.534015 systemd-resolved[230]: Positive Trust Anchors:
Jul 14 22:26:21.534039 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:26:21.534082 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:26:21.537453 systemd-resolved[230]: Defaulting to hostname 'linux'.
Jul 14 22:26:21.539088 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:26:21.545123 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:26:21.628959 kernel: SCSI subsystem initialized
Jul 14 22:26:21.694947 kernel: Loading iSCSI transport class v2.0-870.
Jul 14 22:26:21.705934 kernel: iscsi: registered transport (tcp)
Jul 14 22:26:21.729961 kernel: iscsi: registered transport (qla4xxx)
Jul 14 22:26:21.730060 kernel: QLogic iSCSI HBA Driver
Jul 14 22:26:21.789752 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:26:21.801120 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 14 22:26:21.850931 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 14 22:26:21.850970 kernel: device-mapper: uevent: version 1.0.3
Jul 14 22:26:21.852887 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 14 22:26:21.897961 kernel: raid6: avx2x4 gen() 23890 MB/s
Jul 14 22:26:21.999297 kernel: raid6: avx2x2 gen() 28123 MB/s
Jul 14 22:26:22.016037 kernel: raid6: avx2x1 gen() 23066 MB/s
Jul 14 22:26:22.016103 kernel: raid6: using algorithm avx2x2 gen() 28123 MB/s
Jul 14 22:26:22.102970 kernel: raid6: .... xor() 16719 MB/s, rmw enabled
Jul 14 22:26:22.103058 kernel: raid6: using avx2x2 recovery algorithm
Jul 14 22:26:22.165011 kernel: xor: automatically using best checksumming function avx
Jul 14 22:26:22.364948 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 14 22:26:22.380723 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:26:22.395469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:26:22.414326 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jul 14 22:26:22.419733 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:26:22.479069 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 14 22:26:22.516937 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Jul 14 22:26:22.533067 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:26:22.545281 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:26:22.623655 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:26:22.649233 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 14 22:26:22.664102 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:26:22.673789 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 14 22:26:22.671698 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:26:22.674265 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:26:22.678491 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:26:22.682725 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 14 22:26:22.687924 kernel: cryptd: max_cpu_qlen set to 1000
Jul 14 22:26:22.691158 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 14 22:26:22.704516 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 14 22:26:22.704584 kernel: GPT:9289727 != 19775487
Jul 14 22:26:22.704600 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 14 22:26:22.704616 kernel: GPT:9289727 != 19775487
Jul 14 22:26:22.704630 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 14 22:26:22.704645 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:26:22.710464 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:26:22.717008 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 14 22:26:22.717925 kernel: libata version 3.00 loaded.
Jul 14 22:26:22.720924 kernel: AES CTR mode by8 optimization enabled
Jul 14 22:26:22.729462 kernel: ahci 0000:00:1f.2: version 3.0
Jul 14 22:26:22.729809 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 14 22:26:22.737623 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 14 22:26:22.737963 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 14 22:26:22.735561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:26:22.742128 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Jul 14 22:26:22.735787 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:26:22.739314 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:26:22.748456 kernel: BTRFS: device fsid d23b6972-ad36-4741-bf36-4d440b923127 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (468)
Jul 14 22:26:22.748559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:26:22.749947 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:26:22.752039 kernel: scsi host0: ahci
Jul 14 22:26:22.752297 kernel: scsi host1: ahci
Jul 14 22:26:22.753402 kernel: scsi host2: ahci
Jul 14 22:26:22.754013 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:26:22.760165 kernel: scsi host3: ahci
Jul 14 22:26:22.760446 kernel: scsi host4: ahci
Jul 14 22:26:22.760680 kernel: scsi host5: ahci
Jul 14 22:26:22.760922 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 14 22:26:22.760941 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 14 22:26:22.760955 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 14 22:26:22.760969 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 14 22:26:22.760982 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 14 22:26:22.762599 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 14 22:26:22.765344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:26:22.784490 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 14 22:26:22.837159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:26:22.846569 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 14 22:26:22.861991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:26:22.868810 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 14 22:26:22.926078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 14 22:26:22.946258 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 14 22:26:22.950064 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 14 22:26:22.973537 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:26:23.089948 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 14 22:26:23.090030 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 14 22:26:23.090042 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 14 22:26:23.091953 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 14 22:26:23.092043 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 14 22:26:23.092950 kernel: ata3.00: applying bridge limits
Jul 14 22:26:23.093942 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 14 22:26:23.093994 kernel: ata3.00: configured for UDMA/100
Jul 14 22:26:23.094955 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 14 22:26:23.113953 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 14 22:26:23.145934 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 14 22:26:23.146265 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 14 22:26:23.160078 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 14 22:26:23.873131 disk-uuid[559]: Primary Header is updated.
Jul 14 22:26:23.873131 disk-uuid[559]: Secondary Entries is updated.
Jul 14 22:26:23.873131 disk-uuid[559]: Secondary Header is updated.
Jul 14 22:26:23.918925 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:26:23.923939 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:26:24.996937 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 14 22:26:24.997389 disk-uuid[581]: The operation has completed successfully.
Jul 14 22:26:25.031969 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 14 22:26:25.032129 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 14 22:26:25.082134 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 14 22:26:25.088359 sh[592]: Success
Jul 14 22:26:25.107950 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 14 22:26:25.145773 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 14 22:26:25.179497 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 14 22:26:25.182119 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 14 22:26:25.196202 kernel: BTRFS info (device dm-0): first mount of filesystem d23b6972-ad36-4741-bf36-4d440b923127
Jul 14 22:26:25.196264 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:26:25.196281 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 14 22:26:25.197325 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 14 22:26:25.198130 kernel: BTRFS info (device dm-0): using free space tree
Jul 14 22:26:25.204377 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 14 22:26:25.205681 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 14 22:26:25.211115 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 14 22:26:25.213528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 14 22:26:25.225032 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:26:25.225070 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:26:25.225082 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:26:25.248986 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:26:25.262346 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 14 22:26:25.264561 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:26:25.343787 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:26:25.393300 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:26:25.421127 systemd-networkd[770]: lo: Link UP
Jul 14 22:26:25.421138 systemd-networkd[770]: lo: Gained carrier
Jul 14 22:26:25.423016 systemd-networkd[770]: Enumeration completed
Jul 14 22:26:25.423473 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:26:25.423477 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:26:25.455067 systemd-networkd[770]: eth0: Link UP
Jul 14 22:26:25.455074 systemd-networkd[770]: eth0: Gained carrier
Jul 14 22:26:25.455091 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:26:25.456353 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:26:25.460567 systemd[1]: Reached target network.target - Network.
Jul 14 22:26:25.485031 systemd-networkd[770]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:26:25.809396 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 14 22:26:25.822143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 14 22:26:25.889342 ignition[775]: Ignition 2.19.0
Jul 14 22:26:25.889355 ignition[775]: Stage: fetch-offline
Jul 14 22:26:25.889403 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:26:25.889424 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:26:25.889536 ignition[775]: parsed url from cmdline: ""
Jul 14 22:26:25.889540 ignition[775]: no config URL provided
Jul 14 22:26:25.889545 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Jul 14 22:26:25.889555 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Jul 14 22:26:25.889597 ignition[775]: op(1): [started] loading QEMU firmware config module
Jul 14 22:26:25.889603 ignition[775]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 14 22:26:25.905971 ignition[775]: op(1): [finished] loading QEMU firmware config module
Jul 14 22:26:25.952049 ignition[775]: parsing config with SHA512: 612cfb6ce7a0ec62acd9d531f978d74cac52d2f917f55f540c04d36a8d4eafd727584f3cd8d19af5f1e334325a0da6cd6f44bb67e733e72a55fd6b947c4e1088
Jul 14 22:26:25.957978 unknown[775]: fetched base config from "system"
Jul 14 22:26:25.957995 unknown[775]: fetched user config from "qemu"
Jul 14 22:26:25.958560 ignition[775]: fetch-offline: fetch-offline passed
Jul 14 22:26:25.958655 ignition[775]: Ignition finished successfully
Jul 14 22:26:25.961614 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:26:25.963257 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 14 22:26:25.972153 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 14 22:26:25.995335 ignition[784]: Ignition 2.19.0
Jul 14 22:26:25.995345 ignition[784]: Stage: kargs
Jul 14 22:26:25.995569 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:26:25.995582 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:26:25.996635 ignition[784]: kargs: kargs passed
Jul 14 22:26:25.996688 ignition[784]: Ignition finished successfully
Jul 14 22:26:26.003872 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 14 22:26:26.007144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 14 22:26:26.028882 ignition[793]: Ignition 2.19.0
Jul 14 22:26:26.028912 ignition[793]: Stage: disks
Jul 14 22:26:26.029085 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Jul 14 22:26:26.029099 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:26:26.030071 ignition[793]: disks: disks passed
Jul 14 22:26:26.030119 ignition[793]: Ignition finished successfully
Jul 14 22:26:26.055339 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 14 22:26:26.055983 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 14 22:26:26.057938 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:26:26.058403 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:26:26.058739 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 14 22:26:26.059244 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:26:26.076094 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 14 22:26:26.093093 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 14 22:26:26.573942 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 14 22:26:26.631140 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 14 22:26:26.755947 kernel: EXT4-fs (vda9): mounted filesystem dda007d3-640b-4d11-976f-3b761ca7aabd r/w with ordered data mode. Quota mode: none.
Jul 14 22:26:26.756263 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 14 22:26:26.758466 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 14 22:26:26.769996 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:26:26.772766 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 14 22:26:26.775402 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 14 22:26:26.775453 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 14 22:26:26.790214 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (812)
Jul 14 22:26:26.790249 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:26:26.790261 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:26:26.790272 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:26:26.775478 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:26:26.792937 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:26:26.794096 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 14 22:26:26.796061 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:26:26.799773 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 14 22:26:26.843710 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jul 14 22:26:26.849087 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jul 14 22:26:26.855590 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jul 14 22:26:26.866927 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 14 22:26:26.969302 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 14 22:26:26.984073 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 14 22:26:26.987295 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 14 22:26:26.995541 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 14 22:26:27.001917 kernel: BTRFS info (device vda6): last unmount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:26:27.007156 systemd-networkd[770]: eth0: Gained IPv6LL
Jul 14 22:26:27.020042 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 14 22:26:27.080964 ignition[929]: INFO : Ignition 2.19.0
Jul 14 22:26:27.080964 ignition[929]: INFO : Stage: mount
Jul 14 22:26:27.082816 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:26:27.082816 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:26:27.082816 ignition[929]: INFO : mount: mount passed
Jul 14 22:26:27.082816 ignition[929]: INFO : Ignition finished successfully
Jul 14 22:26:27.090778 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 14 22:26:27.109059 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 14 22:26:27.116322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 14 22:26:27.131533 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Jul 14 22:26:27.131595 kernel: BTRFS info (device vda6): first mount of filesystem 1f379987-f438-494c-89f9-63473ca1b18d
Jul 14 22:26:27.131612 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 14 22:26:27.132350 kernel: BTRFS info (device vda6): using free space tree
Jul 14 22:26:27.137968 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 14 22:26:27.140118 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 14 22:26:27.175447 ignition[957]: INFO : Ignition 2.19.0
Jul 14 22:26:27.175447 ignition[957]: INFO : Stage: files
Jul 14 22:26:27.178180 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:26:27.178180 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:26:27.178180 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jul 14 22:26:27.178180 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 14 22:26:27.178180 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 14 22:26:27.198489 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 14 22:26:27.200637 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 14 22:26:27.200637 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 14 22:26:27.199614 unknown[957]: wrote ssh authorized keys file for user: core
Jul 14 22:26:27.206148 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:26:27.206148 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jul 14 22:26:27.206148 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:26:27.206148 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 14 22:26:27.262939 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 14 22:26:27.457200 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 14 22:26:27.457200 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:26:27.467155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 14 22:26:37.831874 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 14 22:26:38.394171 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 14 22:26:38.394171 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service"
Jul 14 22:26:38.426415 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:26:38.429588 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jul 14 22:26:38.429588 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jul 14 22:26:38.429588 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jul 14 22:26:38.434755 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:26:38.436783 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 14 22:26:38.436783 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jul 14 22:26:38.436783 ignition[957]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jul 14 22:26:38.441589 ignition[957]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:26:38.441589 ignition[957]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 14 22:26:38.446835 ignition[957]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jul 14 22:26:38.446835 ignition[957]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:26:38.476618 ignition[957]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:26:38.484965 ignition[957]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 14 22:26:38.487197 ignition[957]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 14 22:26:38.487197 ignition[957]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jul 14 22:26:38.487197 ignition[957]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jul 14 22:26:38.491920 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:26:38.493659 ignition[957]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 14 22:26:38.495495 ignition[957]: INFO : files: files passed
Jul 14 22:26:38.496336 ignition[957]: INFO : Ignition finished successfully
Jul 14 22:26:38.499737 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 14 22:26:38.511282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 14 22:26:38.514600 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 14 22:26:38.519355 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 14 22:26:38.520469 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 14 22:26:38.527750 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 14 22:26:38.532411 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:26:38.532411 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:26:38.535693 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 14 22:26:38.536020 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:26:38.606541 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 14 22:26:38.623285 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 14 22:26:38.655537 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 14 22:26:38.655731 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 14 22:26:38.658336 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 14 22:26:38.793928 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 14 22:26:38.795344 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 14 22:26:38.806080 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 14 22:26:38.822101 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:26:38.836091 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 14 22:26:38.846678 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:26:39.021406 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:26:39.021831 systemd[1]: Stopped target timers.target - Timer Units.
Jul 14 22:26:39.022184 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 14 22:26:39.022355 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 14 22:26:39.022886 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 14 22:26:39.023248 systemd[1]: Stopped target basic.target - Basic System.
Jul 14 22:26:39.023561 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 14 22:26:39.023886 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 14 22:26:39.024274 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 14 22:26:39.064777 ignition[1011]: INFO : Ignition 2.19.0
Jul 14 22:26:39.064777 ignition[1011]: INFO : Stage: umount
Jul 14 22:26:39.064777 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 14 22:26:39.064777 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 14 22:26:39.064777 ignition[1011]: INFO : umount: umount passed
Jul 14 22:26:39.064777 ignition[1011]: INFO : Ignition finished successfully
Jul 14 22:26:39.024588 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 14 22:26:39.024927 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 14 22:26:39.025264 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 14 22:26:39.025594 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 14 22:26:39.025951 systemd[1]: Stopped target swap.target - Swaps.
Jul 14 22:26:39.026298 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 14 22:26:39.026453 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 14 22:26:39.027186 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:26:39.027511 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:26:39.027825 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 14 22:26:39.028087 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:26:39.028357 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 14 22:26:39.028519 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 14 22:26:39.029184 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 14 22:26:39.029299 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 14 22:26:39.029604 systemd[1]: Stopped target paths.target - Path Units.
Jul 14 22:26:39.029862 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 14 22:26:39.036084 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:26:39.036567 systemd[1]: Stopped target slices.target - Slice Units.
Jul 14 22:26:39.036985 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 14 22:26:39.037332 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 14 22:26:39.037440 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 14 22:26:39.037810 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 14 22:26:39.037922 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 14 22:26:39.038325 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 14 22:26:39.038452 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 14 22:26:39.038795 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 14 22:26:39.038921 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 14 22:26:39.040362 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 14 22:26:39.041684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 14 22:26:39.042192 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 14 22:26:39.042347 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:26:39.042782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 14 22:26:39.042940 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 14 22:26:39.048192 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 14 22:26:39.048341 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 14 22:26:39.066776 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 14 22:26:39.066944 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 14 22:26:39.068774 systemd[1]: Stopped target network.target - Network.
Jul 14 22:26:39.070659 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 14 22:26:39.070717 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 14 22:26:39.072685 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 14 22:26:39.072736 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 14 22:26:39.074555 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 14 22:26:39.074621 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 14 22:26:39.076604 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 14 22:26:39.076665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 14 22:26:39.079098 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 14 22:26:39.081167 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 14 22:26:39.176999 systemd-networkd[770]: eth0: DHCPv6 lease lost
Jul 14 22:26:39.177469 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 14 22:26:39.179403 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 14 22:26:39.179573 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 14 22:26:39.182702 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 14 22:26:39.182863 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 14 22:26:39.185991 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 14 22:26:39.186118 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 14 22:26:39.189090 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 14 22:26:39.189190 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:26:39.191104 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 14 22:26:39.191164 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 14 22:26:39.202041 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 14 22:26:39.203695 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 14 22:26:39.203756 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 14 22:26:39.206158 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 14 22:26:39.206221 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:26:39.208916 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 14 22:26:39.208967 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:26:39.210316 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 14 22:26:39.210372 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:26:39.213289 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:26:39.225033 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 14 22:26:39.225246 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:26:39.227170 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 14 22:26:39.227290 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 14 22:26:39.230219 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 14 22:26:39.230325 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:26:39.231881 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 14 22:26:39.231948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:26:39.234269 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 14 22:26:39.234325 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 14 22:26:39.236637 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 14 22:26:39.236692 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 14 22:26:39.238881 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 14 22:26:39.238949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 14 22:26:39.257082 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 14 22:26:39.258829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 14 22:26:39.258906 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:26:39.261415 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 14 22:26:39.261473 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:26:39.263879 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 14 22:26:39.263947 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:26:39.265409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:26:39.265462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:26:39.380695 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 14 22:26:39.380824 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 14 22:26:39.383572 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 14 22:26:39.396078 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 14 22:26:39.417603 systemd[1]: Switching root.
Jul 14 22:26:39.508610 systemd-journald[194]: Journal stopped
Jul 14 22:26:44.337415 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jul 14 22:26:44.337500 kernel: SELinux: policy capability network_peer_controls=1
Jul 14 22:26:44.337522 kernel: SELinux: policy capability open_perms=1
Jul 14 22:26:44.337534 kernel: SELinux: policy capability extended_socket_class=1
Jul 14 22:26:44.337546 kernel: SELinux: policy capability always_check_network=0
Jul 14 22:26:44.337558 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 14 22:26:44.337570 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 14 22:26:44.337585 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 14 22:26:44.337602 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 14 22:26:44.337615 kernel: audit: type=1403 audit(1752532002.992:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 14 22:26:44.337628 systemd[1]: Successfully loaded SELinux policy in 87.697ms.
Jul 14 22:26:44.337681 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.980ms.
Jul 14 22:26:44.337721 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 14 22:26:44.337754 systemd[1]: Detected virtualization kvm.
Jul 14 22:26:44.337769 systemd[1]: Detected architecture x86-64.
Jul 14 22:26:44.337799 systemd[1]: Detected first boot.
Jul 14 22:26:44.337819 systemd[1]: Initializing machine ID from VM UUID.
Jul 14 22:26:44.337835 zram_generator::config[1071]: No configuration found.
Jul 14 22:26:44.337849 systemd[1]: Populated /etc with preset unit settings.
Jul 14 22:26:44.337862 systemd[1]: Queued start job for default target multi-user.target.
Jul 14 22:26:44.337882 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 14 22:26:44.337915 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 14 22:26:44.337929 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 14 22:26:44.337942 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 14 22:26:44.337958 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 14 22:26:44.337977 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 14 22:26:44.337999 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 14 22:26:44.338012 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 14 22:26:44.338024 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 14 22:26:44.338037 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 14 22:26:44.338050 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 14 22:26:44.338062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 14 22:26:44.338075 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 14 22:26:44.338091 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
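"Initializing machine ID from VM UUID" above refers to systemd deriving the first-boot machine ID from the hypervisor-supplied VM UUID. A sketch of where that UUID comes from on a KVM guest, assuming DMI is exposed as it is on this Q35 machine:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // The hypervisor's VM UUID is exposed through DMI; systemd reads
        // it when /etc/machine-id is absent on first boot.
        raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
        if err != nil {
            panic(err)
        }
        id := strings.ToLower(strings.ReplaceAll(strings.TrimSpace(string(raw)), "-", ""))
        fmt.Println("machine-id candidate:", id) // 32 hex chars, as in /etc/machine-id
    }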
Jul 14 22:26:44.338104 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 14 22:26:44.338116 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 14 22:26:44.338129 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 14 22:26:44.338141 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 14 22:26:44.338154 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 14 22:26:44.338166 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 14 22:26:44.338179 systemd[1]: Reached target slices.target - Slice Units.
Jul 14 22:26:44.338194 systemd[1]: Reached target swap.target - Swaps.
Jul 14 22:26:44.338209 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 14 22:26:44.338221 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 14 22:26:44.338234 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 14 22:26:44.338247 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 14 22:26:44.338259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 14 22:26:44.338272 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 14 22:26:44.338284 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 14 22:26:44.338296 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 14 22:26:44.338311 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 14 22:26:44.338323 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 14 22:26:44.338335 systemd[1]: Mounting media.mount - External Media Directory...
Jul 14 22:26:44.338348 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:26:44.338361 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 14 22:26:44.338373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 14 22:26:44.338385 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 14 22:26:44.338398 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 14 22:26:44.338411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:26:44.338426 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 14 22:26:44.338439 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 14 22:26:44.338451 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:26:44.338464 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:26:44.338477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:26:44.338490 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 14 22:26:44.338502 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:26:44.338515 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 14 22:26:44.338531 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jul 14 22:26:44.338544 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jul 14 22:26:44.338556 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 14 22:26:44.338591 systemd-journald[1148]: Collecting audit messages is disabled.
Jul 14 22:26:44.338618 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 14 22:26:44.338630 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 14 22:26:44.338643 systemd-journald[1148]: Journal started
Jul 14 22:26:44.338668 systemd-journald[1148]: Runtime Journal (/run/log/journal/d1880ddd92f84ddc91f129cd0f5e2675) is 6.0M, max 48.3M, 42.2M free.
Jul 14 22:26:44.341084 kernel: fuse: init (API version 7.39)
Jul 14 22:26:44.341146 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 14 22:26:44.346743 kernel: loop: module loaded
Jul 14 22:26:44.348990 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 14 22:26:44.350971 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:26:44.355240 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 14 22:26:44.356932 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 14 22:26:44.358454 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 14 22:26:44.384228 systemd[1]: Mounted media.mount - External Media Directory.
Jul 14 22:26:44.385787 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 14 22:26:44.387501 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 14 22:26:44.389150 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 14 22:26:44.390990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 14 22:26:44.393035 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 14 22:26:44.393573 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 14 22:26:44.395639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:26:44.396054 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:26:44.397918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:26:44.398311 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:26:44.400379 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 14 22:26:44.400614 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 14 22:26:44.401927 kernel: ACPI: bus type drm_connector registered
Jul 14 22:26:44.402756 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:26:44.403009 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:26:44.405352 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:26:44.405685 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:26:44.408546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 14 22:26:44.410763 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 14 22:26:44.412979 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 14 22:26:44.459492 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 14 22:26:44.473170 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 14 22:26:44.476257 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 14 22:26:44.477621 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 14 22:26:44.483123 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 14 22:26:44.486290 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 14 22:26:44.489249 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:26:44.497184 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 14 22:26:44.515984 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:26:44.519179 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 14 22:26:44.525165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 14 22:26:44.534942 systemd-journald[1148]: Time spent on flushing to /var/log/journal/d1880ddd92f84ddc91f129cd0f5e2675 is 15.580ms for 979 entries.
Jul 14 22:26:44.534942 systemd-journald[1148]: System Journal (/var/log/journal/d1880ddd92f84ddc91f129cd0f5e2675) is 8.0M, max 195.6M, 187.6M free.
Jul 14 22:26:44.967663 systemd-journald[1148]: Received client request to flush runtime journal.
Jul 14 22:26:44.532182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 14 22:26:44.538565 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 14 22:26:44.540292 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 14 22:26:44.552334 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 14 22:26:44.567989 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 14 22:26:44.585143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 14 22:26:44.587526 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jul 14 22:26:44.587546 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Jul 14 22:26:44.595626 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 14 22:26:44.893205 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 14 22:26:44.903729 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 14 22:26:44.970328 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 14 22:26:45.090981 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 14 22:26:45.173302 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 14 22:26:45.209573 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 14 22:26:45.220056 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 14 22:26:45.243285 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Jul 14 22:26:45.243314 systemd-tmpfiles[1230]: ACLs are not supported, ignoring.
Jul 14 22:26:45.250836 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 14 22:26:46.512097 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 14 22:26:46.521363 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 14 22:26:46.570956 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Jul 14 22:26:46.594181 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 14 22:26:46.619157 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 14 22:26:46.634097 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 14 22:26:46.650357 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jul 14 22:26:46.883410 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1239)
Jul 14 22:26:46.930927 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 14 22:26:46.936919 kernel: ACPI: button: Power Button [PWRF]
Jul 14 22:26:46.942230 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 14 22:26:46.959047 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 14 22:26:47.035687 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 14 22:26:47.037815 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 14 22:26:47.038008 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 14 22:26:47.038194 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 14 22:26:47.040916 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 14 22:26:47.069936 kernel: mousedev: PS/2 mouse device common for all mice
Jul 14 22:26:47.072279 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:26:47.076428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 14 22:26:47.077954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:26:47.090080 systemd-networkd[1240]: lo: Link UP
Jul 14 22:26:47.090097 systemd-networkd[1240]: lo: Gained carrier
Jul 14 22:26:47.091991 systemd-networkd[1240]: Enumeration completed
Jul 14 22:26:47.092476 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:26:47.092489 systemd-networkd[1240]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 14 22:26:47.093531 systemd-networkd[1240]: eth0: Link UP
Jul 14 22:26:47.093536 systemd-networkd[1240]: eth0: Gained carrier
Jul 14 22:26:47.093550 systemd-networkd[1240]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 14 22:26:47.117097 systemd-networkd[1240]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 14 22:26:47.117669 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 14 22:26:47.120318 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 14 22:26:47.124760 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
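networkd's lease line above carries the address, prefix length, and gateway in one string. A sketch using Go's net/netip to split those fields apart, with the values copied from the lease itself:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Values taken from the networkd lease line above.
        lease := netip.MustParsePrefix("10.0.0.138/16")
        gw := netip.MustParseAddr("10.0.0.1")
        fmt.Println("address:", lease.Addr())               // 10.0.0.138
        fmt.Println("network:", lease.Masked())             // 10.0.0.0/16
        fmt.Println("gateway on-link:", lease.Contains(gw)) // true
    }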
Jul 14 22:26:47.238500 kernel: kvm_amd: TSC scaling supported
Jul 14 22:26:47.238588 kernel: kvm_amd: Nested Virtualization enabled
Jul 14 22:26:47.238602 kernel: kvm_amd: Nested Paging enabled
Jul 14 22:26:47.239049 kernel: kvm_amd: LBR virtualization supported
Jul 14 22:26:47.240333 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 14 22:26:47.240360 kernel: kvm_amd: Virtual GIF supported
Jul 14 22:26:47.260931 kernel: EDAC MC: Ver: 3.0.0
Jul 14 22:26:47.269220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 14 22:26:47.289040 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 14 22:26:47.404178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 14 22:26:47.416170 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:26:47.484678 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 14 22:26:47.510135 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 14 22:26:47.520094 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 14 22:26:47.572062 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 14 22:26:47.598915 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 14 22:26:47.600705 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 14 22:26:47.602224 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 14 22:26:47.602271 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 14 22:26:47.603529 systemd[1]: Reached target machines.target - Containers.
Jul 14 22:26:47.606370 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 14 22:26:47.619170 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 14 22:26:47.623358 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 14 22:26:47.624840 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:26:47.626209 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 14 22:26:47.629299 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 14 22:26:47.637403 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 14 22:26:47.670699 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 14 22:26:47.861136 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 14 22:26:47.864967 kernel: loop0: detected capacity change from 0 to 140768
Jul 14 22:26:47.915953 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 14 22:26:47.943930 kernel: loop1: detected capacity change from 0 to 142488
Jul 14 22:26:47.985956 kernel: loop2: detected capacity change from 0 to 221472
Jul 14 22:26:48.108947 kernel: loop3: detected capacity change from 0 to 140768
Jul 14 22:26:48.122927 kernel: loop4: detected capacity change from 0 to 142488
Jul 14 22:26:48.134745 kernel: loop5: detected capacity change from 0 to 221472
Jul 14 22:26:48.141026 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 14 22:26:48.142067 (sd-merge)[1307]: Merged extensions into '/usr'.
Jul 14 22:26:48.160304 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 14 22:26:48.160320 systemd[1]: Reloading...
Jul 14 22:26:48.254032 zram_generator::config[1336]: No configuration found.
Jul 14 22:26:48.431585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:26:48.528060 systemd[1]: Reloading finished in 367 ms.
Jul 14 22:26:48.625850 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 14 22:26:48.639268 systemd-networkd[1240]: eth0: Gained IPv6LL
Jul 14 22:26:48.645552 systemd[1]: Starting ensure-sysext.service...
Jul 14 22:26:48.648661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 14 22:26:48.650623 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 14 22:26:48.662715 systemd[1]: Reloading requested from client PID 1378 ('systemctl') (unit ensure-sysext.service)...
Jul 14 22:26:48.662738 systemd[1]: Reloading...
Jul 14 22:26:48.733442 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 14 22:26:48.733977 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 14 22:26:48.735265 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 14 22:26:48.735571 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jul 14 22:26:48.735660 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jul 14 22:26:48.741637 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:26:48.741653 systemd-tmpfiles[1380]: Skipping /boot
Jul 14 22:26:48.783933 zram_generator::config[1410]: No configuration found.
Jul 14 22:26:48.832186 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Jul 14 22:26:48.832205 systemd-tmpfiles[1380]: Skipping /boot
Jul 14 22:26:48.868580 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 14 22:26:48.993109 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 14 22:26:49.079625 systemd[1]: Reloading finished in 416 ms.
Jul 14 22:26:49.107524 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
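(sd-merge) found three extension images here (the three pairs of loop devices above are their squashfs attachments) and overlaid them onto /usr. A sketch that lists images from the standard sysext search paths, assuming it runs on the booted host; on this boot, kubernetes.raw is linked under /etc/extensions:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // /etc/extensions and /var/lib/extensions are standard
        // systemd-sysext search locations.
        for _, dir := range []string{"/etc/extensions", "/var/lib/extensions"} {
            entries, err := os.ReadDir(dir)
            if err != nil {
                continue // path may simply not exist
            }
            for _, e := range entries {
                fmt.Println(filepath.Join(dir, e.Name()))
            }
        }
    }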
Jul 14 22:26:49.109315 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 14 22:26:49.177702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 14 22:26:49.187506 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 14 22:26:49.267260 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 14 22:26:49.271516 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 14 22:26:49.276803 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 14 22:26:49.283210 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 14 22:26:49.291626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:26:49.292329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:26:49.294820 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:26:49.366299 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:26:49.370244 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:26:49.373212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:26:49.374169 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:26:49.377419 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 14 22:26:49.382009 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:26:49.382674 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:26:49.438250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:26:49.438560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:26:49.441169 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 14 22:26:49.443579 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:26:49.443924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:26:49.451630 augenrules[1488]: No rules
Jul 14 22:26:49.452447 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 14 22:26:49.469603 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 14 22:26:49.528360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:26:49.528652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 14 22:26:49.538187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 14 22:26:49.542074 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 14 22:26:49.546088 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 14 22:26:49.547933 systemd-resolved[1462]: Positive Trust Anchors:
Jul 14 22:26:49.547953 systemd-resolved[1462]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 14 22:26:49.547995 systemd-resolved[1462]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 14 22:26:49.552951 systemd-resolved[1462]: Defaulting to hostname 'linux'.
Jul 14 22:26:49.553432 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 14 22:26:49.594725 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 14 22:26:49.598097 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 14 22:26:49.599231 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 14 22:26:49.599861 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 14 22:26:49.601737 systemd[1]: Finished ensure-sysext.service.
Jul 14 22:26:49.603119 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 14 22:26:49.605294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 14 22:26:49.605580 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 14 22:26:49.607431 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 14 22:26:49.607728 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 14 22:26:49.609449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 14 22:26:49.609687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 14 22:26:49.611552 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 14 22:26:49.611885 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 14 22:26:49.616860 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 14 22:26:49.685583 systemd[1]: Reached target network.target - Network.
Jul 14 22:26:49.686761 systemd[1]: Reached target network-online.target - Network is Online.
Jul 14 22:26:49.687915 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 14 22:26:49.689202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 14 22:26:49.689299 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 14 22:26:49.703187 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 14 22:26:49.704424 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 14 22:26:49.783263 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 14 22:26:49.820696 systemd[1]: Reached target sysinit.target - System Initialization.
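The positive trust anchor resolved logs above is the DNSSEC root DS record. A sketch that names its fields, with the meanings of the numeric values spelled out:

    package main

    import "fmt"

    // DS mirrors the fields of a DNSSEC delegation-signer record.
    type DS struct {
        KeyTag     uint16 // 20326 identifies the root KSK-2017
        Algorithm  uint8  // 8 = RSA/SHA-256
        DigestType uint8  // 2 = SHA-256
        Digest     string // hash of the root zone's DNSKEY
    }

    func main() {
        anchor := DS{20326, 8, 2,
            "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"}
        fmt.Printf(". IN DS %d %d %d %s\n",
            anchor.KeyTag, anchor.Algorithm, anchor.DigestType, anchor.Digest)
    }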
Jul 14 22:26:50.972886 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 14 22:26:50.972937 systemd-resolved[1462]: Clock change detected. Flushing caches.
Jul 14 22:26:50.974467 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 14 22:26:50.974481 systemd-timesyncd[1522]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 14 22:26:50.976043 systemd-timesyncd[1522]: Initial clock synchronization to Mon 2025-07-14 22:26:50.972815 UTC.
Jul 14 22:26:50.976049 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 14 22:26:50.977576 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 14 22:26:50.977623 systemd[1]: Reached target paths.target - Path Units.
Jul 14 22:26:50.978767 systemd[1]: Reached target time-set.target - System Time Set.
Jul 14 22:26:50.980294 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 14 22:26:50.981709 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 14 22:26:50.983106 systemd[1]: Reached target timers.target - Timer Units.
Jul 14 22:26:50.985403 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 14 22:26:50.988783 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 14 22:26:50.991430 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 14 22:26:50.995175 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 14 22:26:50.996486 systemd[1]: Reached target sockets.target - Socket Units.
Jul 14 22:26:50.997632 systemd[1]: Reached target basic.target - Basic System.
Jul 14 22:26:50.999005 systemd[1]: System is tainted: cgroupsv1
Jul 14 22:26:50.999062 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:26:50.999094 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 14 22:26:51.001106 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 14 22:26:51.004080 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 14 22:26:51.006934 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 14 22:26:51.009831 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 14 22:26:51.016616 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 14 22:26:51.018969 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 14 22:26:51.021587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 14 22:26:51.024504 jq[1529]: false
Jul 14 22:26:51.025782 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 14 22:26:51.040068 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 14 22:26:51.043714 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 14 22:26:51.048690 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
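The journal timestamps jump by roughly 1.15 s between "Reached target sysinit.target" and "Started motdgen.path": that is the step timesyncd applied at its initial synchronization, and it is also why resolved flushes its caches here. A sketch computing the apparent step from the two journal timestamps, assuming the missing year is irrelevant for the subtraction:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the surrounding journal lines.
        before, _ := time.Parse(time.StampMicro, "Jul 14 22:26:49.820696")
        after, _ := time.Parse(time.StampMicro, "Jul 14 22:26:50.972886")
        fmt.Println("apparent clock step:", after.Sub(before)) // about 1.152s
    }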
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found loop3
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found loop4
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found loop5
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found sr0
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda1
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda2
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda3
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found usr
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda4
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda6
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda7
Jul 14 22:26:51.051451 extend-filesystems[1532]: Found vda9
Jul 14 22:26:51.051451 extend-filesystems[1532]: Checking size of /dev/vda9
Jul 14 22:26:51.058105 dbus-daemon[1528]: [system] SELinux support is enabled
Jul 14 22:26:51.055505 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 14 22:26:51.073752 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 14 22:26:51.093765 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 14 22:26:51.096776 systemd[1]: Starting update-engine.service - Update Engine...
Jul 14 22:26:51.101062 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 14 22:26:51.103594 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 14 22:26:51.115933 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 14 22:26:51.116301 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 14 22:26:51.119740 systemd[1]: motdgen.service: Deactivated successfully.
Jul 14 22:26:51.125124 jq[1561]: true
Jul 14 22:26:51.120054 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 14 22:26:51.129691 update_engine[1560]: I20250714 22:26:51.125722 1560 main.cc:92] Flatcar Update Engine starting
Jul 14 22:26:51.129691 update_engine[1560]: I20250714 22:26:51.127542 1560 update_check_scheduler.cc:74] Next update check in 11m32s
Jul 14 22:26:51.144529 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 14 22:26:51.147949 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 14 22:26:51.148426 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 14 22:26:51.148474 extend-filesystems[1532]: Resized partition /dev/vda9
Jul 14 22:26:51.155729 extend-filesystems[1574]: resize2fs 1.47.1 (20-May-2024)
Jul 14 22:26:51.169832 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 14 22:26:51.178605 jq[1573]: true
Jul 14 22:26:51.175363 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 14 22:26:51.175723 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 14 22:26:51.194849 systemd-logind[1548]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 14 22:26:51.196425 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 14 22:26:51.196946 systemd-logind[1548]: New seat seat0.
Jul 14 22:26:51.197051 systemd[1]: Started update-engine.service - Update Engine.
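extend-filesystems has resized the vda9 partition and handed the filesystem to resize2fs; the kernel lines further down record the grow from 553472 to 1864699 blocks. A sketch of the size arithmetic, assuming the ext4 default 4 KiB block size (consistent with the fsck block counts earlier in the log):

    package main

    import "fmt"

    func main() {
        // Block counts from the EXT4-fs resize messages; 4 KiB blocks assumed.
        const block = 4096
        before := int64(553472) * block
        after := int64(1864699) * block
        fmt.Printf("before: %.2f GiB, after: %.2f GiB\n",
            float64(before)/(1<<30), float64(after)/(1<<30)) // ~2.11 GiB -> ~7.11 GiB
    }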
Jul 14 22:26:51.199190 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 14 22:26:51.222541 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1597) Jul 14 22:26:51.267098 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 14 22:26:51.267401 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 14 22:26:51.289521 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 14 22:26:51.289577 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 14 22:26:51.292594 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 14 22:26:51.299980 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 14 22:26:51.303606 systemd[1]: Started systemd-logind.service - User Login Management. Jul 14 22:26:51.330948 tar[1572]: linux-amd64/helm Jul 14 22:26:51.363138 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 14 22:26:51.392080 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 14 22:26:51.402958 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 14 22:26:51.416202 systemd[1]: issuegen.service: Deactivated successfully. Jul 14 22:26:51.416635 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 14 22:26:51.450933 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 14 22:26:51.463766 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 14 22:26:51.750198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 14 22:26:51.771785 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 14 22:26:51.774798 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 14 22:26:51.777086 systemd[1]: Reached target getty.target - Login Prompts. Jul 14 22:26:51.808182 tar[1572]: linux-amd64/LICENSE Jul 14 22:26:51.808332 tar[1572]: linux-amd64/README.md Jul 14 22:26:51.826010 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 14 22:26:51.914544 locksmithd[1621]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 14 22:26:52.128426 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 14 22:26:53.360298 containerd[1575]: time="2025-07-14T22:26:53.360111403Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 14 22:26:53.392028 containerd[1575]: time="2025-07-14T22:26:53.391893243Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.395037 containerd[1575]: time="2025-07-14T22:26:53.394804871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.97-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:26:53.395037 containerd[1575]: time="2025-07-14T22:26:53.394854805Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 14 22:26:53.395037 containerd[1575]: time="2025-07-14T22:26:53.394882527Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 14 22:26:53.395394 containerd[1575]: time="2025-07-14T22:26:53.395364370Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 14 22:26:53.395447 containerd[1575]: time="2025-07-14T22:26:53.395398454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.395691 containerd[1575]: time="2025-07-14T22:26:53.395533878Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:26:53.395691 containerd[1575]: time="2025-07-14T22:26:53.395567721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.396023 containerd[1575]: time="2025-07-14T22:26:53.395975586Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:26:53.396023 containerd[1575]: time="2025-07-14T22:26:53.396004370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.396095 containerd[1575]: time="2025-07-14T22:26:53.396024347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:26:53.396095 containerd[1575]: time="2025-07-14T22:26:53.396038744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.396222 containerd[1575]: time="2025-07-14T22:26:53.396198143Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.396882 containerd[1575]: time="2025-07-14T22:26:53.396842431Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 14 22:26:53.397160 containerd[1575]: time="2025-07-14T22:26:53.397113058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 14 22:26:53.397160 containerd[1575]: time="2025-07-14T22:26:53.397144507Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 14 22:26:53.397372 containerd[1575]: time="2025-07-14T22:26:53.397314115Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 14 22:26:53.398127 containerd[1575]: time="2025-07-14T22:26:53.397520592Z" level=info msg="metadata content store policy set" policy=shared Jul 14 22:26:53.469069 extend-filesystems[1574]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 14 22:26:53.469069 extend-filesystems[1574]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 14 22:26:53.469069 extend-filesystems[1574]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 14 22:26:53.475617 extend-filesystems[1532]: Resized filesystem in /dev/vda9 Jul 14 22:26:53.472294 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 14 22:26:53.472803 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 14 22:26:54.406563 bash[1619]: Updated "/home/core/.ssh/authorized_keys" Jul 14 22:26:54.409786 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 14 22:26:54.472404 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 14 22:26:54.601411 containerd[1575]: time="2025-07-14T22:26:54.601273699Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 14 22:26:54.601411 containerd[1575]: time="2025-07-14T22:26:54.601418611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 14 22:26:54.602035 containerd[1575]: time="2025-07-14T22:26:54.601459698Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 14 22:26:54.602035 containerd[1575]: time="2025-07-14T22:26:54.601483623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 14 22:26:54.602035 containerd[1575]: time="2025-07-14T22:26:54.601507938Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 14 22:26:54.602035 containerd[1575]: time="2025-07-14T22:26:54.601801128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 14 22:26:54.602451 containerd[1575]: time="2025-07-14T22:26:54.602408016Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 14 22:26:54.602704 containerd[1575]: time="2025-07-14T22:26:54.602669296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 14 22:26:54.602704 containerd[1575]: time="2025-07-14T22:26:54.602694833Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 14 22:26:54.602774 containerd[1575]: time="2025-07-14T22:26:54.602712877Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 14 22:26:54.602774 containerd[1575]: time="2025-07-14T22:26:54.602734688Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.602774 containerd[1575]: time="2025-07-14T22:26:54.602759425Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.602851 containerd[1575]: time="2025-07-14T22:26:54.602782307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Jul 14 22:26:54.602851 containerd[1575]: time="2025-07-14T22:26:54.602810089Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.602851 containerd[1575]: time="2025-07-14T22:26:54.602839374Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.602940 containerd[1575]: time="2025-07-14T22:26:54.602862608Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.602940 containerd[1575]: time="2025-07-14T22:26:54.602884469Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.602940 containerd[1575]: time="2025-07-14T22:26:54.602905238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 14 22:26:54.603024 containerd[1575]: time="2025-07-14T22:26:54.602948769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603024 containerd[1575]: time="2025-07-14T22:26:54.602971793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603024 containerd[1575]: time="2025-07-14T22:26:54.602988925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603024 containerd[1575]: time="2025-07-14T22:26:54.603005686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603024 containerd[1575]: time="2025-07-14T22:26:54.603023049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603040111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603068284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603089203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603111104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603142122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603160577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603194 containerd[1575]: time="2025-07-14T22:26:54.603182968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603402 containerd[1575]: time="2025-07-14T22:26:54.603202575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603402 containerd[1575]: time="2025-07-14T22:26:54.603226941Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jul 14 22:26:54.603402 containerd[1575]: time="2025-07-14T22:26:54.603266956Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603402 containerd[1575]: time="2025-07-14T22:26:54.603287364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603402 containerd[1575]: time="2025-07-14T22:26:54.603304847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 14 22:26:54.603553 containerd[1575]: time="2025-07-14T22:26:54.603416727Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 14 22:26:54.603553 containerd[1575]: time="2025-07-14T22:26:54.603447615Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 14 22:26:54.603553 containerd[1575]: time="2025-07-14T22:26:54.603467963Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 14 22:26:54.603553 containerd[1575]: time="2025-07-14T22:26:54.603491196Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 14 22:26:54.603553 containerd[1575]: time="2025-07-14T22:26:54.603508509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 14 22:26:54.603553 containerd[1575]: time="2025-07-14T22:26:54.603530370Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 14 22:26:54.603727 containerd[1575]: time="2025-07-14T22:26:54.603563051Z" level=info msg="NRI interface is disabled by configuration." Jul 14 22:26:54.603727 containerd[1575]: time="2025-07-14T22:26:54.603579953Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 14 22:26:54.605017 containerd[1575]: time="2025-07-14T22:26:54.604905458Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 14 22:26:54.605017 containerd[1575]: time="2025-07-14T22:26:54.605003602Z" level=info msg="Connect containerd service" Jul 14 22:26:54.605263 containerd[1575]: time="2025-07-14T22:26:54.605085395Z" level=info msg="using legacy CRI server" Jul 14 22:26:54.605263 containerd[1575]: time="2025-07-14T22:26:54.605102898Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 14 22:26:54.605336 containerd[1575]: time="2025-07-14T22:26:54.605294206Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 14 22:26:54.606323 containerd[1575]: time="2025-07-14T22:26:54.606283441Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 14 
22:26:54.606474 containerd[1575]: time="2025-07-14T22:26:54.606415368Z" level=info msg="Start subscribing containerd event" Jul 14 22:26:54.606526 containerd[1575]: time="2025-07-14T22:26:54.606500007Z" level=info msg="Start recovering state" Jul 14 22:26:54.606605 containerd[1575]: time="2025-07-14T22:26:54.606589354Z" level=info msg="Start event monitor" Jul 14 22:26:54.606696 containerd[1575]: time="2025-07-14T22:26:54.606611466Z" level=info msg="Start snapshots syncer" Jul 14 22:26:54.606696 containerd[1575]: time="2025-07-14T22:26:54.606628488Z" level=info msg="Start cni network conf syncer for default" Jul 14 22:26:54.606696 containerd[1575]: time="2025-07-14T22:26:54.606638607Z" level=info msg="Start streaming server" Jul 14 22:26:54.606899 containerd[1575]: time="2025-07-14T22:26:54.606875080Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 14 22:26:54.606979 containerd[1575]: time="2025-07-14T22:26:54.606952766Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 14 22:26:54.607093 containerd[1575]: time="2025-07-14T22:26:54.607065807Z" level=info msg="containerd successfully booted in 1.808740s" Jul 14 22:26:54.607243 systemd[1]: Started containerd.service - containerd container runtime. Jul 14 22:26:55.468950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:26:55.471070 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 14 22:26:55.473252 systemd[1]: Startup finished in 23.669s (kernel) + 11.439s (userspace) = 35.108s. Jul 14 22:26:55.475768 (kubelet)[1675]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:26:56.354050 kubelet[1675]: E0714 22:26:56.353949 1675 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:26:56.358826 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:26:56.359207 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:26:59.202691 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 14 22:26:59.215661 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:43538.service - OpenSSH per-connection server daemon (10.0.0.1:43538). Jul 14 22:26:59.261517 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 43538 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:26:59.263755 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:26:59.273555 systemd-logind[1548]: New session 1 of user core. Jul 14 22:26:59.274883 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 14 22:26:59.283623 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 14 22:26:59.301703 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 14 22:26:59.312865 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 14 22:26:59.316405 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 14 22:26:59.499422 systemd[1695]: Queued start job for default target default.target. 
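
containerd comes up cleanly: snapshotter plugins whose prerequisites are missing (aufs module, btrfs/zfs backing filesystems, devmapper configuration) are skipped rather than fatal, overlayfs is selected, and the CRI plugin only warns that /etc/cni/net.d holds no network config yet. The skip list is queryable, assuming the bundled ctr client:

    ctr plugins ls    # skipped snapshotters show STATUS "skip", overlayfs shows "ok"

The kubelet, by contrast, fails hard and will keep failing for the rest of this log: /var/lib/kubelet/config.yaml is absent. That file is normally produced during node bootstrap (for example by kubeadm init/join on kubeadm-managed nodes). Purely as an illustrative sketch, a minimal hand-written KubeletConfiguration clears this specific error, though the kubelet still needs a kubeconfig and a CNI config before it can run pods:

    # Hypothetical minimal config; real clusters generate this during bootstrap
    mkdir -p /var/lib/kubelet
    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    EOF
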
Jul 14 22:26:59.499978 systemd[1695]: Created slice app.slice - User Application Slice. Jul 14 22:26:59.500014 systemd[1695]: Reached target paths.target - Paths. Jul 14 22:26:59.500046 systemd[1695]: Reached target timers.target - Timers. Jul 14 22:26:59.509768 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 14 22:26:59.519407 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 14 22:26:59.519518 systemd[1695]: Reached target sockets.target - Sockets. Jul 14 22:26:59.519541 systemd[1695]: Reached target basic.target - Basic System. Jul 14 22:26:59.519607 systemd[1695]: Reached target default.target - Main User Target. Jul 14 22:26:59.519661 systemd[1695]: Startup finished in 195ms. Jul 14 22:26:59.521767 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 14 22:26:59.531986 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 14 22:26:59.599845 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:43546.service - OpenSSH per-connection server daemon (10.0.0.1:43546). Jul 14 22:26:59.639419 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 43546 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:26:59.641336 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:26:59.647440 systemd-logind[1548]: New session 2 of user core. Jul 14 22:26:59.657915 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 14 22:26:59.715533 sshd[1707]: pam_unix(sshd:session): session closed for user core Jul 14 22:26:59.724623 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:43558.service - OpenSSH per-connection server daemon (10.0.0.1:43558). Jul 14 22:26:59.725112 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:43546.service: Deactivated successfully. Jul 14 22:26:59.727991 systemd-logind[1548]: Session 2 logged out. Waiting for processes to exit. Jul 14 22:26:59.728942 systemd[1]: session-2.scope: Deactivated successfully. Jul 14 22:26:59.730516 systemd-logind[1548]: Removed session 2. Jul 14 22:26:59.760928 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 43558 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:26:59.762872 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:26:59.769862 systemd-logind[1548]: New session 3 of user core. Jul 14 22:26:59.784700 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 14 22:26:59.837882 sshd[1712]: pam_unix(sshd:session): session closed for user core Jul 14 22:26:59.846595 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:43562.service - OpenSSH per-connection server daemon (10.0.0.1:43562). Jul 14 22:26:59.847103 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:43558.service: Deactivated successfully. Jul 14 22:26:59.850139 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Jul 14 22:26:59.851968 systemd[1]: session-3.scope: Deactivated successfully. Jul 14 22:26:59.852765 systemd-logind[1548]: Removed session 3. Jul 14 22:26:59.882669 sshd[1720]: Accepted publickey for core from 10.0.0.1 port 43562 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:26:59.885970 sshd[1720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:26:59.890440 systemd-logind[1548]: New session 4 of user core. Jul 14 22:26:59.904917 systemd[1]: Started session-4.scope - Session 4 of User core. 
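
The session churn above is the standard systemd login pipeline: sshd authenticates, pam_unix opens the session, systemd-logind allocates a session scope, and a per-user manager (user@500.service) brings up the user's own targets before session-1.scope starts. The same state is visible from userspace, assuming loginctl:

    loginctl list-sessions      # one row per active login (matches the session-N.scope units)
    loginctl user-status core   # shows user@500.service and the user's session tree
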
Jul 14 22:26:59.962330 sshd[1720]: pam_unix(sshd:session): session closed for user core Jul 14 22:26:59.976622 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:43566.service - OpenSSH per-connection server daemon (10.0.0.1:43566). Jul 14 22:26:59.977278 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:43562.service: Deactivated successfully. Jul 14 22:26:59.980513 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Jul 14 22:26:59.981962 systemd[1]: session-4.scope: Deactivated successfully. Jul 14 22:26:59.983278 systemd-logind[1548]: Removed session 4. Jul 14 22:27:00.013308 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 43566 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:27:00.015269 sshd[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:27:00.020763 systemd-logind[1548]: New session 5 of user core. Jul 14 22:27:00.031801 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 14 22:27:00.100165 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 14 22:27:00.100602 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:27:00.126061 sudo[1735]: pam_unix(sudo:session): session closed for user root Jul 14 22:27:00.129094 sshd[1728]: pam_unix(sshd:session): session closed for user core Jul 14 22:27:00.138843 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:43578.service - OpenSSH per-connection server daemon (10.0.0.1:43578). Jul 14 22:27:00.139605 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:43566.service: Deactivated successfully. Jul 14 22:27:00.143223 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Jul 14 22:27:00.143961 systemd[1]: session-5.scope: Deactivated successfully. Jul 14 22:27:00.146128 systemd-logind[1548]: Removed session 5. Jul 14 22:27:00.181716 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 43578 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:27:00.183767 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:27:00.192181 systemd-logind[1548]: New session 6 of user core. Jul 14 22:27:00.203021 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 14 22:27:00.265594 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 14 22:27:00.266079 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:27:00.271490 sudo[1745]: pam_unix(sudo:session): session closed for user root Jul 14 22:27:00.281331 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 14 22:27:00.281806 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:27:00.310293 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 14 22:27:00.313163 auditctl[1748]: No rules Jul 14 22:27:00.314060 systemd[1]: audit-rules.service: Deactivated successfully. Jul 14 22:27:00.314538 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 14 22:27:00.318254 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 14 22:27:00.371564 augenrules[1767]: No rules Jul 14 22:27:00.373742 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
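
The sudo commands remove the shipped audit rule files and restart audit-rules.service; both auditctl and augenrules then report an empty rule set, which is exactly what the "No rules" entries show. The equivalent manual check, assuming the audit userspace tools:

    auditctl -l          # list audit rules currently loaded in the kernel
    augenrules --load    # recompile /etc/audit/rules.d/*.rules and load the result
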
Jul 14 22:27:00.375295 sudo[1744]: pam_unix(sudo:session): session closed for user root Jul 14 22:27:00.377987 sshd[1737]: pam_unix(sshd:session): session closed for user core Jul 14 22:27:00.395728 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:43588.service - OpenSSH per-connection server daemon (10.0.0.1:43588). Jul 14 22:27:00.396442 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:43578.service: Deactivated successfully. Jul 14 22:27:00.398634 systemd[1]: session-6.scope: Deactivated successfully. Jul 14 22:27:00.399272 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Jul 14 22:27:00.400759 systemd-logind[1548]: Removed session 6. Jul 14 22:27:00.430240 sshd[1773]: Accepted publickey for core from 10.0.0.1 port 43588 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:27:00.432183 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:27:00.436931 systemd-logind[1548]: New session 7 of user core. Jul 14 22:27:00.450695 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 14 22:27:00.505935 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 14 22:27:00.506291 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 14 22:27:01.220835 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 14 22:27:01.221194 (dockerd)[1799]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 14 22:27:01.823403 dockerd[1799]: time="2025-07-14T22:27:01.823292014Z" level=info msg="Starting up" Jul 14 22:27:04.141031 dockerd[1799]: time="2025-07-14T22:27:04.140930869Z" level=info msg="Loading containers: start." Jul 14 22:27:04.900387 kernel: Initializing XFRM netlink socket Jul 14 22:27:04.991368 systemd-networkd[1240]: docker0: Link UP Jul 14 22:27:05.159827 dockerd[1799]: time="2025-07-14T22:27:05.159653090Z" level=info msg="Loading containers: done." Jul 14 22:27:05.177833 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3025041298-merged.mount: Deactivated successfully. Jul 14 22:27:05.349048 dockerd[1799]: time="2025-07-14T22:27:05.348913966Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 14 22:27:05.349232 dockerd[1799]: time="2025-07-14T22:27:05.349109352Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 14 22:27:05.349272 dockerd[1799]: time="2025-07-14T22:27:05.349259334Z" level=info msg="Daemon has completed initialization" Jul 14 22:27:06.185059 dockerd[1799]: time="2025-07-14T22:27:06.184928164Z" level=info msg="API listen on /run/docker.sock" Jul 14 22:27:06.185390 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 14 22:27:06.526742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 14 22:27:06.539740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:27:06.832450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
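
dockerd selects the overlay2 storage driver but warns it cannot use the native overlayfs diff path because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR, so image-build diffs fall back to a slower generic implementation; per the warning this degrades image builds, not container runtime behavior. Confirming the active driver, assuming the docker CLI is installed:

    docker info --format '{{.Driver}}'    # expected output on this host: overlay2
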
Jul 14 22:27:06.839008 (kubelet)[1956]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:27:06.896538 kubelet[1956]: E0714 22:27:06.896311 1956 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:27:06.905069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:27:06.905457 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:27:12.815411 containerd[1575]: time="2025-07-14T22:27:12.815363092Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 14 22:27:17.027026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 14 22:27:17.134690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:27:17.400627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:27:17.406905 (kubelet)[1981]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:27:17.499012 kubelet[1981]: E0714 22:27:17.498887 1981 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:27:17.504142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:27:17.504555 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:27:27.527262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 14 22:27:27.541785 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:27:27.749030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:27:27.755552 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:27:28.100859 kubelet[2003]: E0714 22:27:28.100731 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:27:28.105816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:27:28.106182 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:27:28.603800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440293430.mount: Deactivated successfully. 
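
The "Scheduled restart job, restart counter is at N" entries are systemd's Restart= policy re-queuing kubelet.service after each exit-code failure; because the missing-config failure is deterministic, the counter keeps climbing through the rest of the log. The policy and counter can be read back directly:

    # NRestarts is systemd's count of automatic restarts for the unit
    systemctl show kubelet.service -p Restart -p RestartSec -p NRestarts
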
Jul 14 22:27:35.482866 containerd[1575]: time="2025-07-14T22:27:35.482757683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:35.528497 containerd[1575]: time="2025-07-14T22:27:35.528369205Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 14 22:27:35.579317 containerd[1575]: time="2025-07-14T22:27:35.579242055Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:35.661188 containerd[1575]: time="2025-07-14T22:27:35.661106475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:35.662535 containerd[1575]: time="2025-07-14T22:27:35.662471271Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 22.84706098s" Jul 14 22:27:35.662535 containerd[1575]: time="2025-07-14T22:27:35.662531105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 14 22:27:35.663777 containerd[1575]: time="2025-07-14T22:27:35.663541195Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 14 22:27:36.381999 update_engine[1560]: I20250714 22:27:36.381896 1560 update_attempter.cc:509] Updating boot flags... Jul 14 22:27:36.544393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2072) Jul 14 22:27:36.644515 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2075) Jul 14 22:27:36.732658 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2075) Jul 14 22:27:38.276665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 14 22:27:38.286542 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:27:38.469074 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:27:38.475269 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:27:39.434091 kubelet[2093]: E0714 22:27:39.433949 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:27:39.438735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:27:39.439062 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
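
The reported pull time checks out against the surrounding timestamps: PullImage for kube-apiserver:v1.31.10 is logged at 22:27:12.815 and the Pulled event at 22:27:35.662, consistent with the stated 22.84706098s. The same CRI code path the kubelet would use can be exercised by hand, assuming crictl is installed and pointed at containerd's socket:

    crictl pull registry.k8s.io/kube-apiserver:v1.31.10
    crictl images    # list images known to containerd's CRI plugin
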
Jul 14 22:27:42.294238 containerd[1575]: time="2025-07-14T22:27:42.294127217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:42.297755 containerd[1575]: time="2025-07-14T22:27:42.297685417Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 14 22:27:42.302529 containerd[1575]: time="2025-07-14T22:27:42.302220530Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:42.310698 containerd[1575]: time="2025-07-14T22:27:42.310415988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:42.312153 containerd[1575]: time="2025-07-14T22:27:42.312049456Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 6.648464268s" Jul 14 22:27:42.312153 containerd[1575]: time="2025-07-14T22:27:42.312102738Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 14 22:27:42.313448 containerd[1575]: time="2025-07-14T22:27:42.313408054Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 14 22:27:46.549579 containerd[1575]: time="2025-07-14T22:27:46.549501360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:46.744795 containerd[1575]: time="2025-07-14T22:27:46.744682604Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 14 22:27:46.876539 containerd[1575]: time="2025-07-14T22:27:46.876337513Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:47.040745 containerd[1575]: time="2025-07-14T22:27:47.040637336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:47.042183 containerd[1575]: time="2025-07-14T22:27:47.042090583Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 4.728544718s" Jul 14 22:27:47.042183 containerd[1575]: time="2025-07-14T22:27:47.042151889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 14 22:27:47.042832 
containerd[1575]: time="2025-07-14T22:27:47.042796088Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 14 22:27:49.526838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 14 22:27:49.544707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:27:50.463642 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:27:50.471078 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:27:50.531917 kubelet[2123]: E0714 22:27:50.531817 2123 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:27:50.537504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:27:50.537867 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:27:52.979267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2145957892.mount: Deactivated successfully. Jul 14 22:27:54.584120 containerd[1575]: time="2025-07-14T22:27:54.584035121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:54.624758 containerd[1575]: time="2025-07-14T22:27:54.624616336Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 14 22:27:54.647887 containerd[1575]: time="2025-07-14T22:27:54.647814393Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:54.667591 containerd[1575]: time="2025-07-14T22:27:54.667503988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:27:54.668424 containerd[1575]: time="2025-07-14T22:27:54.668328943Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 7.625487329s" Jul 14 22:27:54.668424 containerd[1575]: time="2025-07-14T22:27:54.668409604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 14 22:27:54.668982 containerd[1575]: time="2025-07-14T22:27:54.668942660Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 14 22:28:00.307646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390924214.mount: Deactivated successfully. Jul 14 22:28:00.776672 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 14 22:28:00.790798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:00.968371 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:28:00.974895 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:28:01.245495 kubelet[2157]: E0714 22:28:01.245282 2157 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:28:01.249808 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:28:01.250120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:28:05.671736 containerd[1575]: time="2025-07-14T22:28:05.671639951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:05.672571 containerd[1575]: time="2025-07-14T22:28:05.672488796Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 14 22:28:05.674506 containerd[1575]: time="2025-07-14T22:28:05.674457847Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:05.679384 containerd[1575]: time="2025-07-14T22:28:05.679306261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:05.680858 containerd[1575]: time="2025-07-14T22:28:05.680806131Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 11.011827103s" Jul 14 22:28:05.680858 containerd[1575]: time="2025-07-14T22:28:05.680857417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 14 22:28:05.681627 containerd[1575]: time="2025-07-14T22:28:05.681569094Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 14 22:28:11.241708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055642542.mount: Deactivated successfully. Jul 14 22:28:11.276781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 14 22:28:11.290752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:11.484013 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:28:11.500827 (kubelet)[2231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:28:11.539985 kubelet[2231]: E0714 22:28:11.539894 2231 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:28:11.544991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:28:11.545382 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:28:13.124160 containerd[1575]: time="2025-07-14T22:28:13.124091021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:13.139381 containerd[1575]: time="2025-07-14T22:28:13.139302472Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 14 22:28:13.171414 containerd[1575]: time="2025-07-14T22:28:13.171277242Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:13.187501 containerd[1575]: time="2025-07-14T22:28:13.187410915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:13.188458 containerd[1575]: time="2025-07-14T22:28:13.188394141Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 7.506783488s" Jul 14 22:28:13.188520 containerd[1575]: time="2025-07-14T22:28:13.188456608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 14 22:28:13.189111 containerd[1575]: time="2025-07-14T22:28:13.189076752Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 14 22:28:15.453742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165079407.mount: Deactivated successfully. 
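
registry.k8s.io/pause:3.10 is by far the smallest image in this sequence (size "320368", about 320 KB); the pause image is what a CRI runtime runs as the placeholder process of each pod sandbox. Images pulled through the CRI land in containerd's k8s.io namespace, so they are visible to ctr only when that namespace is selected, assuming the bundled ctr client:

    ctr -n k8s.io images ls | grep pause
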
Jul 14 22:28:20.639233 containerd[1575]: time="2025-07-14T22:28:20.639139268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:20.640605 containerd[1575]: time="2025-07-14T22:28:20.640549726Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 14 22:28:20.642694 containerd[1575]: time="2025-07-14T22:28:20.642630781Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:20.646588 containerd[1575]: time="2025-07-14T22:28:20.646539515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:20.647978 containerd[1575]: time="2025-07-14T22:28:20.647936868Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 7.458811455s" Jul 14 22:28:20.648055 containerd[1575]: time="2025-07-14T22:28:20.647990188Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 14 22:28:21.776848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jul 14 22:28:21.784745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:22.006957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:28:22.012022 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:28:22.060414 kubelet[2315]: E0714 22:28:22.059130 2315 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:28:22.064740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:28:22.065136 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:28:30.806082 containerd[1575]: time="2025-07-14T22:28:30.806038578Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 14 22:28:32.161391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jul 14 22:28:32.167507 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:32.171775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount637969515.mount: Deactivated successfully. Jul 14 22:28:32.349287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:28:32.377971 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:28:32.420458 kubelet[2355]: E0714 22:28:32.420248 2355 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:28:32.425192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:28:32.425525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:28:38.976490 containerd[1575]: time="2025-07-14T22:28:38.976391668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:39.089184 containerd[1575]: time="2025-07-14T22:28:39.088901533Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27813982" Jul 14 22:28:39.159805 containerd[1575]: time="2025-07-14T22:28:39.159719632Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:39.226810 containerd[1575]: time="2025-07-14T22:28:39.226598645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:39.228253 containerd[1575]: time="2025-07-14T22:28:39.228200156Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 8.422107884s" Jul 14 22:28:39.228253 containerd[1575]: time="2025-07-14T22:28:39.228259160Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 14 22:28:39.229249 containerd[1575]: time="2025-07-14T22:28:39.229223676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 14 22:28:42.526746 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Jul 14 22:28:42.541590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:43.349693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 14 22:28:43.356101 (kubelet)[2424]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 14 22:28:43.399530 kubelet[2424]: E0714 22:28:43.399454 2424 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 14 22:28:43.404404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 14 22:28:43.404773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 14 22:28:45.898585 containerd[1575]: time="2025-07-14T22:28:45.898475539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:45.916111 containerd[1575]: time="2025-07-14T22:28:45.916022832Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713776" Jul 14 22:28:45.925020 containerd[1575]: time="2025-07-14T22:28:45.924948880Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:45.931577 containerd[1575]: time="2025-07-14T22:28:45.931508272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:45.932963 containerd[1575]: time="2025-07-14T22:28:45.932920179Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 6.703659181s" Jul 14 22:28:45.933054 containerd[1575]: time="2025-07-14T22:28:45.932988390Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 14 22:28:45.934293 containerd[1575]: time="2025-07-14T22:28:45.934246842Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 14 22:28:47.413202 containerd[1575]: time="2025-07-14T22:28:47.413114076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:47.414278 containerd[1575]: time="2025-07-14T22:28:47.414182020Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780386" Jul 14 22:28:47.416537 containerd[1575]: time="2025-07-14T22:28:47.416384628Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:47.420949 containerd[1575]: time="2025-07-14T22:28:47.420890956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jul 14 22:28:47.422508 containerd[1575]: time="2025-07-14T22:28:47.422359998Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 1.488037942s" Jul 14 22:28:47.422508 containerd[1575]: time="2025-07-14T22:28:47.422400936Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 14 22:28:47.423941 containerd[1575]: time="2025-07-14T22:28:47.423898534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 14 22:28:48.742622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731577632.mount: Deactivated successfully. Jul 14 22:28:50.433699 containerd[1575]: time="2025-07-14T22:28:50.433575927Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:50.436227 containerd[1575]: time="2025-07-14T22:28:50.436103839Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354625" Jul 14 22:28:50.443154 containerd[1575]: time="2025-07-14T22:28:50.442949816Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:50.446413 containerd[1575]: time="2025-07-14T22:28:50.446280563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:28:50.447065 containerd[1575]: time="2025-07-14T22:28:50.446970683Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.023020619s" Jul 14 22:28:50.447065 containerd[1575]: time="2025-07-14T22:28:50.447009497Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 14 22:28:52.329399 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:28:52.340570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:52.370884 systemd[1]: Reloading requested from client PID 2457 ('systemctl') (unit session-7.scope)... Jul 14 22:28:52.370903 systemd[1]: Reloading... Jul 14 22:28:52.483666 zram_generator::config[2496]: No configuration found. Jul 14 22:28:52.930247 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:28:53.038035 systemd[1]: Reloading finished in 666 ms. Jul 14 22:28:53.090558 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 14 22:28:53.090694 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jul 14 22:28:53.091154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:28:53.093243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:28:53.277747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:28:53.283382 (kubelet)[2556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:28:53.378179 kubelet[2556]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:28:53.378179 kubelet[2556]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 14 22:28:53.378179 kubelet[2556]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:28:53.378673 kubelet[2556]: I0714 22:28:53.378263 2556 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:28:53.778070 kubelet[2556]: I0714 22:28:53.778015 2556 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:28:53.778070 kubelet[2556]: I0714 22:28:53.778058 2556 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:28:53.778357 kubelet[2556]: I0714 22:28:53.778328 2556 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:28:53.909313 kubelet[2556]: E0714 22:28:53.909246 2556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:53.910105 kubelet[2556]: I0714 22:28:53.910079 2556 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:28:53.976072 kubelet[2556]: E0714 22:28:53.975993 2556 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:28:53.976072 kubelet[2556]: I0714 22:28:53.976063 2556 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:28:53.986834 kubelet[2556]: I0714 22:28:53.986719 2556 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:28:53.987926 kubelet[2556]: I0714 22:28:53.987879 2556 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:28:53.988214 kubelet[2556]: I0714 22:28:53.988141 2556 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:28:53.988456 kubelet[2556]: I0714 22:28:53.988191 2556 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:28:53.988592 kubelet[2556]: I0714 22:28:53.988470 2556 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:28:53.988592 kubelet[2556]: I0714 22:28:53.988484 2556 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:28:53.988696 kubelet[2556]: I0714 22:28:53.988667 2556 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:28:53.991130 kubelet[2556]: I0714 22:28:53.991097 2556 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:28:53.991193 kubelet[2556]: I0714 22:28:53.991134 2556 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:28:53.991220 kubelet[2556]: I0714 22:28:53.991197 2556 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:28:53.991269 kubelet[2556]: I0714 22:28:53.991249 2556 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:28:53.994859 kubelet[2556]: W0714 22:28:53.994777 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:53.994859 kubelet[2556]: W0714 22:28:53.994822 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: 
connection refused Jul 14 22:28:53.994859 kubelet[2556]: E0714 22:28:53.994850 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:53.995103 kubelet[2556]: E0714 22:28:53.994883 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:54.002400 kubelet[2556]: I0714 22:28:54.002318 2556 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:28:54.002914 kubelet[2556]: I0714 22:28:54.002883 2556 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:28:54.003013 kubelet[2556]: W0714 22:28:54.002995 2556 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 14 22:28:54.006866 kubelet[2556]: I0714 22:28:54.006822 2556 server.go:1274] "Started kubelet" Jul 14 22:28:54.007203 kubelet[2556]: I0714 22:28:54.007149 2556 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 22:28:54.008626 kubelet[2556]: I0714 22:28:54.007646 2556 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:28:54.008626 kubelet[2556]: I0714 22:28:54.007720 2556 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:28:54.008626 kubelet[2556]: I0714 22:28:54.008318 2556 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:28:54.008764 kubelet[2556]: I0714 22:28:54.008731 2556 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:28:54.010830 kubelet[2556]: I0714 22:28:54.009494 2556 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:28:54.012518 kubelet[2556]: I0714 22:28:54.011531 2556 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:28:54.012518 kubelet[2556]: I0714 22:28:54.011647 2556 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:28:54.012518 kubelet[2556]: I0714 22:28:54.011721 2556 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:28:54.012518 kubelet[2556]: W0714 22:28:54.012108 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:54.012518 kubelet[2556]: E0714 22:28:54.012146 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:54.013330 kubelet[2556]: E0714 22:28:54.013216 2556 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.021177 kubelet[2556]: E0714 22:28:54.020287 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Jul 14 22:28:54.021177 kubelet[2556]: I0714 22:28:54.020631 2556 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:28:54.021177 kubelet[2556]: I0714 22:28:54.020721 2556 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:28:54.021177 kubelet[2556]: E0714 22:28:54.019516 2556 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523eb0c8ee847d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:28:54.006785149 +0000 UTC m=+0.716008465,LastTimestamp:2025-07-14 22:28:54.006785149 +0000 UTC m=+0.716008465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:28:54.022262 kubelet[2556]: E0714 22:28:54.021787 2556 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:28:54.022316 kubelet[2556]: I0714 22:28:54.022299 2556 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:28:54.046960 kubelet[2556]: I0714 22:28:54.044677 2556 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:28:54.046960 kubelet[2556]: I0714 22:28:54.046824 2556 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:28:54.046960 kubelet[2556]: I0714 22:28:54.046862 2556 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:28:54.046960 kubelet[2556]: I0714 22:28:54.046894 2556 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:28:54.047149 kubelet[2556]: E0714 22:28:54.046957 2556 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:28:54.048301 kubelet[2556]: W0714 22:28:54.048276 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:54.048416 kubelet[2556]: E0714 22:28:54.048312 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:54.049949 kubelet[2556]: I0714 22:28:54.049902 2556 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:28:54.049949 kubelet[2556]: I0714 22:28:54.049919 2556 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:28:54.049949 kubelet[2556]: I0714 22:28:54.049950 2556 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:28:54.114376 kubelet[2556]: E0714 22:28:54.114293 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.147945 kubelet[2556]: E0714 22:28:54.147816 2556 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:28:54.214822 kubelet[2556]: E0714 22:28:54.214734 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.221869 kubelet[2556]: E0714 22:28:54.221795 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Jul 14 22:28:54.315485 kubelet[2556]: E0714 22:28:54.315237 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.348570 kubelet[2556]: E0714 22:28:54.348491 2556 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:28:54.416288 kubelet[2556]: E0714 22:28:54.416228 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.516407 kubelet[2556]: E0714 22:28:54.516319 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.617169 kubelet[2556]: E0714 22:28:54.616999 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.622688 kubelet[2556]: E0714 22:28:54.622649 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: 
connect: connection refused" interval="800ms" Jul 14 22:28:54.717182 kubelet[2556]: E0714 22:28:54.717125 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.749442 kubelet[2556]: E0714 22:28:54.749338 2556 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:28:54.818097 kubelet[2556]: E0714 22:28:54.818021 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:54.919234 kubelet[2556]: E0714 22:28:54.919037 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.020295 kubelet[2556]: E0714 22:28:55.020179 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.121458 kubelet[2556]: E0714 22:28:55.121383 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.121990 kubelet[2556]: W0714 22:28:55.121854 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:55.122050 kubelet[2556]: E0714 22:28:55.121999 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:55.222407 kubelet[2556]: E0714 22:28:55.222158 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.323078 kubelet[2556]: E0714 22:28:55.323008 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.327812 kubelet[2556]: W0714 22:28:55.327745 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:55.327889 kubelet[2556]: E0714 22:28:55.327818 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:55.344033 kubelet[2556]: W0714 22:28:55.343908 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:55.344033 kubelet[2556]: E0714 22:28:55.344011 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" 
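Note the interval field on the lease-controller errors: 200ms, then 400ms, then 800ms here, and 1.6s, 3.2s and 6.4s further down. The retry delay doubles on every consecutive failure to reach the API server at 10.0.0.138:6443. A compact sketch of that capped doubling (illustrative; the real logic lives in kubelet's node-lease controller):

```go
package main

import (
	"fmt"
	"time"
)

// next doubles the retry delay after each failed lease update, up to a cap.
// The cap here is a guess; the log itself tops out at 6.4s.
func next(cur, maxDelay time.Duration) time.Duration {
	cur *= 2
	if cur > maxDelay {
		return maxDelay
	}
	return cur
}

func main() {
	d := 200 * time.Millisecond
	for i := 0; i < 6; i++ {
		fmt.Println(d) // 200ms 400ms 800ms 1.6s 3.2s 6.4s
		d = next(d, 7*time.Second)
	}
}
```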
Jul 14 22:28:55.353889 kubelet[2556]: W0714 22:28:55.353822 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:55.353889 kubelet[2556]: E0714 22:28:55.353872 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:55.421462 kubelet[2556]: E0714 22:28:55.421279 2556 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18523eb0c8ee847d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-14 22:28:54.006785149 +0000 UTC m=+0.716008465,LastTimestamp:2025-07-14 22:28:54.006785149 +0000 UTC m=+0.716008465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 14 22:28:55.423497 kubelet[2556]: E0714 22:28:55.423431 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.423850 kubelet[2556]: E0714 22:28:55.423786 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="1.6s" Jul 14 22:28:55.523898 kubelet[2556]: E0714 22:28:55.523725 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.550043 kubelet[2556]: E0714 22:28:55.549970 2556 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:28:55.624717 kubelet[2556]: E0714 22:28:55.624657 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.725542 kubelet[2556]: E0714 22:28:55.725475 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.826326 kubelet[2556]: E0714 22:28:55.826250 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:55.927127 kubelet[2556]: E0714 22:28:55.927071 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.028003 kubelet[2556]: E0714 22:28:56.027927 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.070547 kubelet[2556]: E0714 22:28:56.070481 2556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing 
request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:56.128522 kubelet[2556]: E0714 22:28:56.128335 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.229199 kubelet[2556]: E0714 22:28:56.229134 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.329906 kubelet[2556]: E0714 22:28:56.329820 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.431281 kubelet[2556]: E0714 22:28:56.431033 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.531563 kubelet[2556]: E0714 22:28:56.531457 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.632402 kubelet[2556]: E0714 22:28:56.632300 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.734102 kubelet[2556]: E0714 22:28:56.733280 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.834372 kubelet[2556]: E0714 22:28:56.834284 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:56.935525 kubelet[2556]: E0714 22:28:56.935427 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:57.024916 kubelet[2556]: E0714 22:28:57.024706 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="3.2s" Jul 14 22:28:57.036010 kubelet[2556]: E0714 22:28:57.035904 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:57.136610 kubelet[2556]: E0714 22:28:57.136512 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:57.150829 kubelet[2556]: E0714 22:28:57.150717 2556 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 14 22:28:57.154887 kubelet[2556]: W0714 22:28:57.154767 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:57.155072 kubelet[2556]: E0714 22:28:57.154932 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:57.237123 kubelet[2556]: E0714 22:28:57.237005 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:57.241764 kubelet[2556]: W0714 22:28:57.241711 2556 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:57.241764 kubelet[2556]: E0714 22:28:57.241755 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:57.337785 kubelet[2556]: E0714 22:28:57.337652 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:57.390486 kubelet[2556]: W0714 22:28:57.390420 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:57.390486 kubelet[2556]: E0714 22:28:57.390481 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:57.438788 kubelet[2556]: E0714 22:28:57.438692 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:28:57.453825 kubelet[2556]: I0714 22:28:57.453747 2556 policy_none.go:49] "None policy: Start" Jul 14 22:28:57.454809 kubelet[2556]: I0714 22:28:57.454766 2556 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:28:57.454809 kubelet[2556]: I0714 22:28:57.454810 2556 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:28:57.501686 kubelet[2556]: I0714 22:28:57.501635 2556 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:28:57.501907 kubelet[2556]: I0714 22:28:57.501890 2556 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:28:57.502500 kubelet[2556]: I0714 22:28:57.501911 2556 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:28:57.502500 kubelet[2556]: I0714 22:28:57.502219 2556 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:28:57.504042 kubelet[2556]: E0714 22:28:57.503999 2556 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:28:57.604800 kubelet[2556]: I0714 22:28:57.604606 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:28:57.605243 kubelet[2556]: E0714 22:28:57.605173 2556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 14 22:28:57.806905 kubelet[2556]: I0714 22:28:57.806860 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:28:57.807420 kubelet[2556]: E0714 22:28:57.807372 2556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 
10.0.0.138:6443: connect: connection refused" node="localhost" Jul 14 22:28:58.209214 kubelet[2556]: I0714 22:28:58.209170 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:28:58.209645 kubelet[2556]: E0714 22:28:58.209596 2556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 14 22:28:58.505201 kubelet[2556]: W0714 22:28:58.505058 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:28:58.505201 kubelet[2556]: E0714 22:28:58.505138 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:28:59.011780 kubelet[2556]: I0714 22:28:59.011743 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:28:59.012146 kubelet[2556]: E0714 22:28:59.012080 2556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 14 22:29:00.098172 kubelet[2556]: E0714 22:29:00.098115 2556 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:29:00.225519 kubelet[2556]: E0714 22:29:00.225454 2556 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="6.4s" Jul 14 22:29:00.450069 kubelet[2556]: W0714 22:29:00.449871 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:29:00.450069 kubelet[2556]: E0714 22:29:00.449974 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:29:00.455507 kubelet[2556]: I0714 22:29:00.455444 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:29:00.455507 kubelet[2556]: I0714 22:29:00.455499 2556 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/effe176f2fdb2d03895d5aeb7e1ad64a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"effe176f2fdb2d03895d5aeb7e1ad64a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:00.455622 kubelet[2556]: I0714 22:29:00.455519 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:00.455622 kubelet[2556]: I0714 22:29:00.455540 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:00.455622 kubelet[2556]: I0714 22:29:00.455556 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/effe176f2fdb2d03895d5aeb7e1ad64a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"effe176f2fdb2d03895d5aeb7e1ad64a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:00.455622 kubelet[2556]: I0714 22:29:00.455571 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/effe176f2fdb2d03895d5aeb7e1ad64a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"effe176f2fdb2d03895d5aeb7e1ad64a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:00.455622 kubelet[2556]: I0714 22:29:00.455586 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:00.455774 kubelet[2556]: I0714 22:29:00.455605 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:00.455774 kubelet[2556]: I0714 22:29:00.455620 2556 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:00.557048 kubelet[2556]: W0714 22:29:00.556960 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:29:00.557048 kubelet[2556]: E0714 22:29:00.557039 2556 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:29:00.613849 kubelet[2556]: I0714 22:29:00.613802 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:29:00.614301 kubelet[2556]: E0714 22:29:00.614253 2556 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Jul 14 22:29:00.656737 kubelet[2556]: E0714 22:29:00.656656 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:00.657641 containerd[1575]: time="2025-07-14T22:29:00.657599520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:effe176f2fdb2d03895d5aeb7e1ad64a,Namespace:kube-system,Attempt:0,}" Jul 14 22:29:00.658767 kubelet[2556]: E0714 22:29:00.658734 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:00.658767 kubelet[2556]: E0714 22:29:00.658771 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:00.665105 containerd[1575]: time="2025-07-14T22:29:00.665025937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" Jul 14 22:29:00.665105 containerd[1575]: time="2025-07-14T22:29:00.665026147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" Jul 14 22:29:01.013932 kubelet[2556]: W0714 22:29:01.013858 2556 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Jul 14 22:29:01.014073 kubelet[2556]: E0714 22:29:01.013951 2556 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Jul 14 22:29:01.508906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634971740.mount: Deactivated successfully. 
Jul 14 22:29:01.527932 containerd[1575]: time="2025-07-14T22:29:01.527846235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:29:01.531183 containerd[1575]: time="2025-07-14T22:29:01.531114575Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:29:01.533184 containerd[1575]: time="2025-07-14T22:29:01.533098009Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 14 22:29:01.534814 containerd[1575]: time="2025-07-14T22:29:01.534668016Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:29:01.536904 containerd[1575]: time="2025-07-14T22:29:01.536807398Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:29:01.539929 containerd[1575]: time="2025-07-14T22:29:01.539842794Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 14 22:29:01.543182 containerd[1575]: time="2025-07-14T22:29:01.543106274Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:29:01.548310 containerd[1575]: time="2025-07-14T22:29:01.548259289Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 14 22:29:01.549695 containerd[1575]: time="2025-07-14T22:29:01.549646910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 891.882516ms" Jul 14 22:29:01.551747 containerd[1575]: time="2025-07-14T22:29:01.551705027Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 886.557778ms" Jul 14 22:29:01.556668 containerd[1575]: time="2025-07-14T22:29:01.556620470Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 891.49296ms" Jul 14 22:29:01.748525 containerd[1575]: time="2025-07-14T22:29:01.747973525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:01.748525 containerd[1575]: time="2025-07-14T22:29:01.748063155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:01.748525 containerd[1575]: time="2025-07-14T22:29:01.748084716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:01.748525 containerd[1575]: time="2025-07-14T22:29:01.748212669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:01.750705 containerd[1575]: time="2025-07-14T22:29:01.750583661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:01.750705 containerd[1575]: time="2025-07-14T22:29:01.750658564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:01.750705 containerd[1575]: time="2025-07-14T22:29:01.750672280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:01.751059 containerd[1575]: time="2025-07-14T22:29:01.750770567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:01.752471 containerd[1575]: time="2025-07-14T22:29:01.750554005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:01.752654 containerd[1575]: time="2025-07-14T22:29:01.752495249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:01.752654 containerd[1575]: time="2025-07-14T22:29:01.752536397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:01.752817 containerd[1575]: time="2025-07-14T22:29:01.752732309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:01.830605 containerd[1575]: time="2025-07-14T22:29:01.830503900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a0eebd56b9c7a1191777db36232100677b6333951bfc2b2652b8a90e8454594\"" Jul 14 22:29:01.833195 kubelet[2556]: E0714 22:29:01.833173 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:01.835483 containerd[1575]: time="2025-07-14T22:29:01.835448399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1f4fcbcb6423408baf0d91c4bfd5bdcdb2019e9f7c9d864055361e237f2099e\"" Jul 14 22:29:01.835773 containerd[1575]: time="2025-07-14T22:29:01.835712582Z" level=info msg="CreateContainer within sandbox \"3a0eebd56b9c7a1191777db36232100677b6333951bfc2b2652b8a90e8454594\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 14 22:29:01.836134 kubelet[2556]: E0714 22:29:01.836104 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:01.837517 containerd[1575]: time="2025-07-14T22:29:01.837460648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:effe176f2fdb2d03895d5aeb7e1ad64a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b69408a84f852c39289f57025fe2d64812dd22e1c1cad9cf2d4268ab0be36691\"" Jul 14 22:29:01.838023 kubelet[2556]: E0714 22:29:01.838003 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:01.840444 containerd[1575]: time="2025-07-14T22:29:01.840395804Z" level=info msg="CreateContainer within sandbox \"e1f4fcbcb6423408baf0d91c4bfd5bdcdb2019e9f7c9d864055361e237f2099e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 14 22:29:01.853248 containerd[1575]: time="2025-07-14T22:29:01.853205439Z" level=info msg="CreateContainer within sandbox \"b69408a84f852c39289f57025fe2d64812dd22e1c1cad9cf2d4268ab0be36691\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 14 22:29:01.940092 containerd[1575]: time="2025-07-14T22:29:01.940028243Z" level=info msg="CreateContainer within sandbox \"3a0eebd56b9c7a1191777db36232100677b6333951bfc2b2652b8a90e8454594\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a5620b3e77dcbac52513cb07d552ab84f08e4d0d724f91fe51db1cd37b78c0ee\"" Jul 14 22:29:01.940844 containerd[1575]: time="2025-07-14T22:29:01.940815271Z" level=info msg="StartContainer for \"a5620b3e77dcbac52513cb07d552ab84f08e4d0d724f91fe51db1cd37b78c0ee\"" Jul 14 22:29:01.962898 containerd[1575]: time="2025-07-14T22:29:01.962829101Z" level=info msg="CreateContainer within sandbox \"e1f4fcbcb6423408baf0d91c4bfd5bdcdb2019e9f7c9d864055361e237f2099e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f727a2e004f8c58ad73f2878237ef246860049872e50384fa4fa4bbbb9048201\"" Jul 14 22:29:01.964709 containerd[1575]: time="2025-07-14T22:29:01.963528101Z" level=info msg="StartContainer for 
\"f727a2e004f8c58ad73f2878237ef246860049872e50384fa4fa4bbbb9048201\"" Jul 14 22:29:01.965584 containerd[1575]: time="2025-07-14T22:29:01.965536633Z" level=info msg="CreateContainer within sandbox \"b69408a84f852c39289f57025fe2d64812dd22e1c1cad9cf2d4268ab0be36691\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"945eff7c6e53323937e3de814985b19f1197788afd600700f5d9bd955493aacc\"" Jul 14 22:29:01.966044 containerd[1575]: time="2025-07-14T22:29:01.966018010Z" level=info msg="StartContainer for \"945eff7c6e53323937e3de814985b19f1197788afd600700f5d9bd955493aacc\"" Jul 14 22:29:02.036387 containerd[1575]: time="2025-07-14T22:29:02.036315274Z" level=info msg="StartContainer for \"a5620b3e77dcbac52513cb07d552ab84f08e4d0d724f91fe51db1cd37b78c0ee\" returns successfully" Jul 14 22:29:02.060315 containerd[1575]: time="2025-07-14T22:29:02.060244814Z" level=info msg="StartContainer for \"945eff7c6e53323937e3de814985b19f1197788afd600700f5d9bd955493aacc\" returns successfully" Jul 14 22:29:02.060528 containerd[1575]: time="2025-07-14T22:29:02.060393828Z" level=info msg="StartContainer for \"f727a2e004f8c58ad73f2878237ef246860049872e50384fa4fa4bbbb9048201\" returns successfully" Jul 14 22:29:02.068551 kubelet[2556]: E0714 22:29:02.068498 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:02.072027 kubelet[2556]: E0714 22:29:02.071791 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:02.075558 kubelet[2556]: E0714 22:29:02.074913 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:03.083666 kubelet[2556]: E0714 22:29:03.083199 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:03.815813 kubelet[2556]: I0714 22:29:03.815767 2556 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:29:04.047806 kubelet[2556]: I0714 22:29:04.047730 2556 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:29:04.047806 kubelet[2556]: E0714 22:29:04.047793 2556 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 14 22:29:04.643738 kubelet[2556]: E0714 22:29:04.643672 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:04.744368 kubelet[2556]: E0714 22:29:04.744279 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:04.845051 kubelet[2556]: E0714 22:29:04.844982 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:04.945745 kubelet[2556]: E0714 22:29:04.945528 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.046870 kubelet[2556]: E0714 22:29:05.046699 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.147733 kubelet[2556]: E0714 
22:29:05.147660 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.248605 kubelet[2556]: E0714 22:29:05.248420 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.349402 kubelet[2556]: E0714 22:29:05.349290 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.450194 kubelet[2556]: E0714 22:29:05.450135 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.551151 kubelet[2556]: E0714 22:29:05.551063 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.567750 kubelet[2556]: E0714 22:29:05.567635 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:05.651790 kubelet[2556]: E0714 22:29:05.651698 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.752719 kubelet[2556]: E0714 22:29:05.752626 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.854183 kubelet[2556]: E0714 22:29:05.853735 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:05.955069 kubelet[2556]: E0714 22:29:05.954961 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.055291 kubelet[2556]: E0714 22:29:06.055191 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.155840 kubelet[2556]: E0714 22:29:06.155656 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.256638 kubelet[2556]: E0714 22:29:06.256561 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.357507 kubelet[2556]: E0714 22:29:06.357416 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.458660 kubelet[2556]: E0714 22:29:06.458452 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.559079 kubelet[2556]: E0714 22:29:06.559010 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.659936 kubelet[2556]: E0714 22:29:06.659798 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.760882 kubelet[2556]: E0714 22:29:06.760711 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.861822 kubelet[2556]: E0714 22:29:06.861750 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:06.966289 kubelet[2556]: E0714 22:29:06.962797 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:07.019324 kubelet[2556]: E0714 22:29:07.019163 2556 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:07.063368 kubelet[2556]: E0714 22:29:07.063260 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:07.164512 kubelet[2556]: E0714 22:29:07.164329 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:07.265731 kubelet[2556]: E0714 22:29:07.265637 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:07.366731 kubelet[2556]: E0714 22:29:07.366652 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:07.467745 kubelet[2556]: E0714 22:29:07.467680 2556 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:07.504157 kubelet[2556]: E0714 22:29:07.504108 2556 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 14 22:29:08.001366 kubelet[2556]: I0714 22:29:08.001295 2556 apiserver.go:52] "Watching apiserver" Jul 14 22:29:08.011754 kubelet[2556]: I0714 22:29:08.011723 2556 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:29:10.945578 kubelet[2556]: E0714 22:29:10.943291 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:11.092749 kubelet[2556]: E0714 22:29:11.092689 2556 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:12.454614 systemd[1]: Reloading requested from client PID 2838 ('systemctl') (unit session-7.scope)... Jul 14 22:29:12.454645 systemd[1]: Reloading... Jul 14 22:29:12.556394 zram_generator::config[2882]: No configuration found. Jul 14 22:29:12.694673 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 14 22:29:12.787603 systemd[1]: Reloading finished in 332 ms. Jul 14 22:29:12.827916 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:29:12.853058 systemd[1]: kubelet.service: Deactivated successfully. Jul 14 22:29:12.853568 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:29:12.860634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 14 22:29:13.068313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 14 22:29:13.079020 (kubelet)[2933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 14 22:29:13.188074 kubelet[2933]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:29:13.188074 kubelet[2933]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 14 22:29:13.188074 kubelet[2933]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 14 22:29:13.188559 kubelet[2933]: I0714 22:29:13.188140 2933 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 14 22:29:13.194677 kubelet[2933]: I0714 22:29:13.194645 2933 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 14 22:29:13.194677 kubelet[2933]: I0714 22:29:13.194670 2933 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 14 22:29:13.195020 kubelet[2933]: I0714 22:29:13.194992 2933 server.go:934] "Client rotation is on, will bootstrap in background" Jul 14 22:29:13.196283 kubelet[2933]: I0714 22:29:13.196263 2933 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 14 22:29:13.198076 kubelet[2933]: I0714 22:29:13.198051 2933 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 14 22:29:13.202871 kubelet[2933]: E0714 22:29:13.202823 2933 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 14 22:29:13.202871 kubelet[2933]: I0714 22:29:13.202867 2933 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 14 22:29:13.208408 kubelet[2933]: I0714 22:29:13.208331 2933 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 14 22:29:13.209048 kubelet[2933]: I0714 22:29:13.209022 2933 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 14 22:29:13.209255 kubelet[2933]: I0714 22:29:13.209206 2933 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 14 22:29:13.209475 kubelet[2933]: I0714 22:29:13.209262 2933 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 14 22:29:13.209565 kubelet[2933]: I0714 22:29:13.209481 2933 topology_manager.go:138] "Creating topology manager with none policy" Jul 14 22:29:13.209565 kubelet[2933]: I0714 22:29:13.209493 2933 container_manager_linux.go:300] "Creating device plugin manager" Jul 14 22:29:13.209565 kubelet[2933]: I0714 22:29:13.209533 2933 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:29:13.209675 kubelet[2933]: I0714 22:29:13.209659 2933 kubelet.go:408] "Attempting to sync node with API server" Jul 14 22:29:13.209706 kubelet[2933]: I0714 22:29:13.209677 2933 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 14 22:29:13.209736 kubelet[2933]: I0714 22:29:13.209716 2933 kubelet.go:314] "Adding apiserver pod source" Jul 14 22:29:13.209736 kubelet[2933]: I0714 22:29:13.209729 2933 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 14 22:29:13.210639 kubelet[2933]: I0714 22:29:13.210605 2933 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 14 22:29:13.213385 kubelet[2933]: I0714 22:29:13.211133 2933 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 14 22:29:13.213385 kubelet[2933]: I0714 22:29:13.211626 2933 server.go:1274] "Started kubelet" Jul 14 22:29:13.213385 kubelet[2933]: I0714 22:29:13.212424 2933 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 14 
22:29:13.213385 kubelet[2933]: I0714 22:29:13.212743 2933 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 14 22:29:13.213385 kubelet[2933]: I0714 22:29:13.212811 2933 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 14 22:29:13.213385 kubelet[2933]: I0714 22:29:13.213316 2933 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 14 22:29:13.213902 kubelet[2933]: I0714 22:29:13.213868 2933 server.go:449] "Adding debug handlers to kubelet server" Jul 14 22:29:13.217038 kubelet[2933]: I0714 22:29:13.216972 2933 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 14 22:29:13.221434 kubelet[2933]: I0714 22:29:13.220960 2933 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 14 22:29:13.221434 kubelet[2933]: E0714 22:29:13.221310 2933 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 14 22:29:13.224133 kubelet[2933]: I0714 22:29:13.221946 2933 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 14 22:29:13.224133 kubelet[2933]: I0714 22:29:13.222133 2933 reconciler.go:26] "Reconciler: start to sync state" Jul 14 22:29:13.253178 kubelet[2933]: I0714 22:29:13.253149 2933 factory.go:221] Registration of the systemd container factory successfully Jul 14 22:29:13.253620 kubelet[2933]: I0714 22:29:13.253567 2933 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 14 22:29:13.255284 kubelet[2933]: E0714 22:29:13.255141 2933 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 14 22:29:13.256160 kubelet[2933]: I0714 22:29:13.256119 2933 factory.go:221] Registration of the containerd container factory successfully Jul 14 22:29:13.263030 kubelet[2933]: I0714 22:29:13.262994 2933 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 14 22:29:13.264524 kubelet[2933]: I0714 22:29:13.264501 2933 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 14 22:29:13.264524 kubelet[2933]: I0714 22:29:13.264523 2933 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 14 22:29:13.265180 kubelet[2933]: I0714 22:29:13.264681 2933 kubelet.go:2321] "Starting kubelet main sync loop" Jul 14 22:29:13.265180 kubelet[2933]: E0714 22:29:13.264751 2933 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 14 22:29:13.307589 kubelet[2933]: I0714 22:29:13.307553 2933 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 14 22:29:13.307810 kubelet[2933]: I0714 22:29:13.307755 2933 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 14 22:29:13.307810 kubelet[2933]: I0714 22:29:13.307789 2933 state_mem.go:36] "Initialized new in-memory state store" Jul 14 22:29:13.307992 kubelet[2933]: I0714 22:29:13.307969 2933 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 14 22:29:13.308014 kubelet[2933]: I0714 22:29:13.307984 2933 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 14 22:29:13.308014 kubelet[2933]: I0714 22:29:13.308006 2933 policy_none.go:49] "None policy: Start" Jul 14 22:29:13.308695 kubelet[2933]: I0714 22:29:13.308677 2933 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 14 22:29:13.308753 kubelet[2933]: I0714 22:29:13.308699 2933 state_mem.go:35] "Initializing new in-memory state store" Jul 14 22:29:13.308841 kubelet[2933]: I0714 22:29:13.308826 2933 state_mem.go:75] "Updated machine memory state" Jul 14 22:29:13.311139 kubelet[2933]: I0714 22:29:13.310550 2933 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 14 22:29:13.311139 kubelet[2933]: I0714 22:29:13.310785 2933 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 14 22:29:13.311139 kubelet[2933]: I0714 22:29:13.310799 2933 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 14 22:29:13.311139 kubelet[2933]: I0714 22:29:13.311056 2933 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 14 22:29:13.417670 kubelet[2933]: I0714 22:29:13.417532 2933 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 14 22:29:13.524010 kubelet[2933]: I0714 22:29:13.523950 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:13.524010 kubelet[2933]: I0714 22:29:13.524009 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:13.524010 kubelet[2933]: I0714 22:29:13.524038 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/effe176f2fdb2d03895d5aeb7e1ad64a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"effe176f2fdb2d03895d5aeb7e1ad64a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:13.524274 kubelet[2933]: I0714 
22:29:13.524087 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/effe176f2fdb2d03895d5aeb7e1ad64a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"effe176f2fdb2d03895d5aeb7e1ad64a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:13.524274 kubelet[2933]: I0714 22:29:13.524182 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:13.524274 kubelet[2933]: I0714 22:29:13.524253 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:13.524421 kubelet[2933]: I0714 22:29:13.524290 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" Jul 14 22:29:13.524421 kubelet[2933]: I0714 22:29:13.524319 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/effe176f2fdb2d03895d5aeb7e1ad64a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"effe176f2fdb2d03895d5aeb7e1ad64a\") " pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:13.524421 kubelet[2933]: I0714 22:29:13.524370 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" Jul 14 22:29:13.720801 kubelet[2933]: E0714 22:29:13.720506 2933 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 14 22:29:13.720932 kubelet[2933]: E0714 22:29:13.720823 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:13.745882 kubelet[2933]: I0714 22:29:13.745834 2933 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 14 22:29:13.746089 kubelet[2933]: I0714 22:29:13.745965 2933 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 14 22:29:13.860602 kubelet[2933]: E0714 22:29:13.860529 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:13.860602 kubelet[2933]: E0714 22:29:13.860562 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:14.211098 kubelet[2933]: I0714 22:29:14.211024 2933 apiserver.go:52] "Watching apiserver" Jul 14 22:29:14.222566 kubelet[2933]: I0714 22:29:14.222493 2933 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 14 22:29:14.276299 kubelet[2933]: E0714 22:29:14.276236 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:14.276465 kubelet[2933]: E0714 22:29:14.276404 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:14.276560 kubelet[2933]: E0714 22:29:14.276517 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:15.090188 kubelet[2933]: I0714 22:29:15.090067 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.090042091 podStartE2EDuration="2.090042091s" podCreationTimestamp="2025-07-14 22:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:29:15.089934006 +0000 UTC m=+1.973182422" watchObservedRunningTime="2025-07-14 22:29:15.090042091 +0000 UTC m=+1.973290497" Jul 14 22:29:15.276735 kubelet[2933]: E0714 22:29:15.276697 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:15.316376 kubelet[2933]: I0714 22:29:15.316169 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.31613379 podStartE2EDuration="5.31613379s" podCreationTimestamp="2025-07-14 22:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:29:15.248748287 +0000 UTC m=+2.131996693" watchObservedRunningTime="2025-07-14 22:29:15.31613379 +0000 UTC m=+2.199382186" Jul 14 22:29:15.316376 kubelet[2933]: I0714 22:29:15.316276 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.316272072 podStartE2EDuration="2.316272072s" podCreationTimestamp="2025-07-14 22:29:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:29:15.315969179 +0000 UTC m=+2.199217585" watchObservedRunningTime="2025-07-14 22:29:15.316272072 +0000 UTC m=+2.199520478" Jul 14 22:29:17.882600 kubelet[2933]: I0714 22:29:17.882556 2933 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 14 22:29:17.883326 kubelet[2933]: I0714 22:29:17.883248 2933 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 14 22:29:17.883392 containerd[1575]: time="2025-07-14T22:29:17.883029230Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
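The recurring dns.go:153 "Nameserver limits exceeded" warnings throughout this boot are kubelet reacting to the node's resolv.conf: the glibc resolver only honours the first three nameserver entries, so kubelet applies three (here 1.1.1.1, 1.0.0.1 and 8.8.8.8) and warns that the rest were omitted. A minimal Go sketch of that truncation, assuming a standard resolv.conf layout (illustrative only, not kubelet's actual dns.go):

    // resolvtrim.go: sketch of the three-nameserver ceiling behind the
    // "Nameserver limits exceeded" warnings in this log (illustrative only).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // maxNameservers mirrors the glibc resolver limit (MAXNS = 3) that
    // kubelet checks against; entries past this count are omitted.
    const maxNameservers = 3

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("Nameserver limits exceeded; applying %s, omitting %s\n",
                strings.Join(servers[:maxNameservers], " "),
                strings.Join(servers[maxNameservers:], " "))
            return
        }
        fmt.Printf("applied nameserver line: %s\n", strings.Join(servers, " "))
    }

Run against a resolv.conf listing four or more servers, this reproduces the shape of the warning; with three or fewer, kubelet stays quiet and the applied line matches the file.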
Jul 14 22:29:18.427012 kubelet[2933]: E0714 22:29:18.426887 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:19.282169 kubelet[2933]: E0714 22:29:19.282134 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:19.843108 kubelet[2933]: E0714 22:29:19.843053 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:20.283758 kubelet[2933]: E0714 22:29:20.283725 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:21.285896 kubelet[2933]: E0714 22:29:21.285826 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:22.784323 kubelet[2933]: I0714 22:29:22.784248 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bf6ae90b-7d49-4e75-8398-43fc140396f7-lib-modules\") pod \"kube-proxy-5dnhv\" (UID: \"bf6ae90b-7d49-4e75-8398-43fc140396f7\") " pod="kube-system/kube-proxy-5dnhv" Jul 14 22:29:22.784323 kubelet[2933]: I0714 22:29:22.784300 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bf6ae90b-7d49-4e75-8398-43fc140396f7-kube-proxy\") pod \"kube-proxy-5dnhv\" (UID: \"bf6ae90b-7d49-4e75-8398-43fc140396f7\") " pod="kube-system/kube-proxy-5dnhv" Jul 14 22:29:22.784323 kubelet[2933]: I0714 22:29:22.784321 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bf6ae90b-7d49-4e75-8398-43fc140396f7-xtables-lock\") pod \"kube-proxy-5dnhv\" (UID: \"bf6ae90b-7d49-4e75-8398-43fc140396f7\") " pod="kube-system/kube-proxy-5dnhv" Jul 14 22:29:22.784323 kubelet[2933]: I0714 22:29:22.784361 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9dld\" (UniqueName: \"kubernetes.io/projected/bf6ae90b-7d49-4e75-8398-43fc140396f7-kube-api-access-j9dld\") pod \"kube-proxy-5dnhv\" (UID: \"bf6ae90b-7d49-4e75-8398-43fc140396f7\") " pod="kube-system/kube-proxy-5dnhv" Jul 14 22:29:23.175087 kubelet[2933]: E0714 22:29:23.174936 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:23.289250 kubelet[2933]: E0714 22:29:23.289206 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:23.380506 kubelet[2933]: E0714 22:29:23.380470 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:23.381167 containerd[1575]: time="2025-07-14T22:29:23.381116751Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-5dnhv,Uid:bf6ae90b-7d49-4e75-8398-43fc140396f7,Namespace:kube-system,Attempt:0,}" Jul 14 22:29:24.583288 containerd[1575]: time="2025-07-14T22:29:24.583162763Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:24.583288 containerd[1575]: time="2025-07-14T22:29:24.583222125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:24.583288 containerd[1575]: time="2025-07-14T22:29:24.583233066Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:24.583969 containerd[1575]: time="2025-07-14T22:29:24.583499360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:24.624298 containerd[1575]: time="2025-07-14T22:29:24.624249533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5dnhv,Uid:bf6ae90b-7d49-4e75-8398-43fc140396f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f167806815cbee8ec4f679a4f9ede6a1aee2af73d5b456626e4bbeda71b06fc8\"" Jul 14 22:29:24.625263 kubelet[2933]: E0714 22:29:24.625219 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:24.627613 containerd[1575]: time="2025-07-14T22:29:24.627569294Z" level=info msg="CreateContainer within sandbox \"f167806815cbee8ec4f679a4f9ede6a1aee2af73d5b456626e4bbeda71b06fc8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 14 22:29:25.324238 containerd[1575]: time="2025-07-14T22:29:25.324132379Z" level=info msg="CreateContainer within sandbox \"f167806815cbee8ec4f679a4f9ede6a1aee2af73d5b456626e4bbeda71b06fc8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e68331c52b9e9eb1cc57a6850462f0c9b543bf3f28cf34dcf7c13012f7baed1b\"" Jul 14 22:29:25.325228 containerd[1575]: time="2025-07-14T22:29:25.325171856Z" level=info msg="StartContainer for \"e68331c52b9e9eb1cc57a6850462f0c9b543bf3f28cf34dcf7c13012f7baed1b\"" Jul 14 22:29:25.419910 containerd[1575]: time="2025-07-14T22:29:25.419715743Z" level=info msg="StartContainer for \"e68331c52b9e9eb1cc57a6850462f0c9b543bf3f28cf34dcf7c13012f7baed1b\" returns successfully" Jul 14 22:29:26.297527 kubelet[2933]: E0714 22:29:26.297491 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:27.298843 kubelet[2933]: E0714 22:29:27.298781 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:29.409174 kubelet[2933]: I0714 22:29:29.409070 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5dnhv" podStartSLOduration=11.409043612 podStartE2EDuration="11.409043612s" podCreationTimestamp="2025-07-14 22:29:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:29:26.679309589 +0000 UTC m=+13.562557995" watchObservedRunningTime="2025-07-14 22:29:29.409043612 +0000 UTC m=+16.292292018" Jul 14 22:29:29.421392 
kubelet[2933]: I0714 22:29:29.421314 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6mpj\" (UniqueName: \"kubernetes.io/projected/2f7810be-1255-4225-b0fb-c3d2d893b643-kube-api-access-v6mpj\") pod \"tigera-operator-5bf8dfcb4-d9lvg\" (UID: \"2f7810be-1255-4225-b0fb-c3d2d893b643\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-d9lvg" Jul 14 22:29:29.421392 kubelet[2933]: I0714 22:29:29.421368 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f7810be-1255-4225-b0fb-c3d2d893b643-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-d9lvg\" (UID: \"2f7810be-1255-4225-b0fb-c3d2d893b643\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-d9lvg" Jul 14 22:29:29.714690 containerd[1575]: time="2025-07-14T22:29:29.714541263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-d9lvg,Uid:2f7810be-1255-4225-b0fb-c3d2d893b643,Namespace:tigera-operator,Attempt:0,}" Jul 14 22:29:29.762374 containerd[1575]: time="2025-07-14T22:29:29.762186192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:29.762374 containerd[1575]: time="2025-07-14T22:29:29.762294396Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:29.762374 containerd[1575]: time="2025-07-14T22:29:29.762310828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:29.762613 containerd[1575]: time="2025-07-14T22:29:29.762471070Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:29.839767 containerd[1575]: time="2025-07-14T22:29:29.839697577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-d9lvg,Uid:2f7810be-1255-4225-b0fb-c3d2d893b643,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"49fff431d5f92af91befd2d3e5b9eea93c616e639a51a372f022085e1a324889\"" Jul 14 22:29:29.842049 containerd[1575]: time="2025-07-14T22:29:29.842005200Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 14 22:29:35.816021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3042795659.mount: Deactivated successfully. 
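The "\x2d" in the mount unit name directly above is systemd's escaping, not log corruption: unit names encode "/" as "-" and a literal "-" as "\x2d", so the deactivated unit is a transient tmpmount containerd created under /var/lib/containerd while unpacking the operator image. A small sketch of the reverse mapping, covering only the escapes seen here (systemd-escape(1) handles the general \xXX case):

    // unitpath.go: decodes the mount unit name logged above,
    // "var-lib-containerd-tmpmounts-containerd\x2dmount3042795659.mount".
    package main

    import (
        "fmt"
        "strings"
    )

    // unescapeUnitPath reverses systemd's path escaping for this unit:
    // "-" encodes "/" and the literal sequence "\x2d" encodes "-".
    // Order matters: restore "/" first, since `\x2d` contains no "-".
    func unescapeUnitPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        name = strings.ReplaceAll(name, "-", "/")
        name = strings.ReplaceAll(name, `\x2d`, "-")
        return "/" + name
    }

    func main() {
        fmt.Println(unescapeUnitPath(`var-lib-containerd-tmpmounts-containerd\x2dmount3042795659.mount`))
        // Output: /var/lib/containerd/tmpmounts/containerd-mount3042795659
    }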
Jul 14 22:29:37.808712 containerd[1575]: time="2025-07-14T22:29:37.808635616Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:29:37.891826 containerd[1575]: time="2025-07-14T22:29:37.891652079Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 14 22:29:37.985582 containerd[1575]: time="2025-07-14T22:29:37.985496935Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:29:38.067280 containerd[1575]: time="2025-07-14T22:29:38.067087672Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:29:38.068080 containerd[1575]: time="2025-07-14T22:29:38.068051142Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 8.225987361s" Jul 14 22:29:38.068138 containerd[1575]: time="2025-07-14T22:29:38.068084044Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 14 22:29:38.070204 containerd[1575]: time="2025-07-14T22:29:38.070164263Z" level=info msg="CreateContainer within sandbox \"49fff431d5f92af91befd2d3e5b9eea93c616e639a51a372f022085e1a324889\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 14 22:29:38.764441 containerd[1575]: time="2025-07-14T22:29:38.764325025Z" level=info msg="CreateContainer within sandbox \"49fff431d5f92af91befd2d3e5b9eea93c616e639a51a372f022085e1a324889\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fe002092bfe39ba13e40f5d6db4eec9a065e20a8a0a24212b6e6e5bcb46ce7e7\"" Jul 14 22:29:38.764933 containerd[1575]: time="2025-07-14T22:29:38.764864563Z" level=info msg="StartContainer for \"fe002092bfe39ba13e40f5d6db4eec9a065e20a8a0a24212b6e6e5bcb46ce7e7\"" Jul 14 22:29:39.584606 containerd[1575]: time="2025-07-14T22:29:39.584526718Z" level=info msg="StartContainer for \"fe002092bfe39ba13e40f5d6db4eec9a065e20a8a0a24212b6e6e5bcb46ce7e7\" returns successfully" Jul 14 22:29:49.718009 sudo[1780]: pam_unix(sudo:session): session closed for user root Jul 14 22:29:49.788934 sshd[1773]: pam_unix(sshd:session): session closed for user core Jul 14 22:29:49.793404 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:43588.service: Deactivated successfully. Jul 14 22:29:49.795889 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Jul 14 22:29:49.795942 systemd[1]: session-7.scope: Deactivated successfully. Jul 14 22:29:49.797460 systemd-logind[1548]: Removed session 7. 
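The tigera-operator startup record below is a worked example of the two durations kubelet's pod_startup_latency_tracker reports: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to additionally subtract the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why it drops from about 29.7s to 21.5s here, while the kube-proxy record above (no image pull, zero-valued pull timestamps) shows the two durations equal. A quick check of that relation against the logged wall-clock values; the tracker itself uses Go's monotonic readings (the "m=+..." suffixes), which accounts for the ~10ns residual:

    // slotimes.go: cross-checks the startup durations logged below for
    // tigera-operator-5bf8dfcb4-d9lvg using the wall-clock fields only.
    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // layout matches Go's default time.Time string form used in these
    // kubelet log fields (the trailing "m=+..." reading is dropped here).
    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-07-14 22:29:29 +0000 UTC")
        watched := mustParse("2025-07-14 22:29:58.708670076 +0000 UTC")
        pullStart := mustParse("2025-07-14 22:29:29.841304205 +0000 UTC")
        pullEnd := mustParse("2025-07-14 22:29:38.068897209 +0000 UTC")

        e2e := watched.Sub(created)         // 29.708670076s, as logged
        slo := e2e - pullEnd.Sub(pullStart) // image-pull window excluded

        fmt.Printf("podStartE2EDuration=%v podStartSLOduration=%v\n", e2e, slo)
        // Residual vs the logged 21.481077082s is ~10ns (monotonic rounding).
        if math.Abs(slo.Seconds()-21.481077082) > 1e-6 {
            panic("relation does not hold")
        }
    }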
Jul 14 22:29:58.709269 kubelet[2933]: I0714 22:29:58.708690 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-d9lvg" podStartSLOduration=21.481077082 podStartE2EDuration="29.708670076s" podCreationTimestamp="2025-07-14 22:29:29 +0000 UTC" firstStartedPulling="2025-07-14 22:29:29.841304205 +0000 UTC m=+16.724552621" lastFinishedPulling="2025-07-14 22:29:38.068897209 +0000 UTC m=+24.952145615" observedRunningTime="2025-07-14 22:29:40.862038354 +0000 UTC m=+27.745286760" watchObservedRunningTime="2025-07-14 22:29:58.708670076 +0000 UTC m=+45.591918482" Jul 14 22:29:58.803483 kubelet[2933]: I0714 22:29:58.802837 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-policysync\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803483 kubelet[2933]: I0714 22:29:58.802892 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-var-lib-calico\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803483 kubelet[2933]: I0714 22:29:58.802914 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3648e792-10bd-47d5-b1a3-8721cc9e0b3a-typha-certs\") pod \"calico-typha-5cbbf86854-xbrct\" (UID: \"3648e792-10bd-47d5-b1a3-8721cc9e0b3a\") " pod="calico-system/calico-typha-5cbbf86854-xbrct" Jul 14 22:29:58.803483 kubelet[2933]: I0714 22:29:58.802967 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-xtables-lock\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803483 kubelet[2933]: I0714 22:29:58.802995 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-cni-bin-dir\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803786 kubelet[2933]: I0714 22:29:58.803026 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mxhg\" (UniqueName: \"kubernetes.io/projected/34d3fac0-17ac-4448-9296-a6a13fc77c79-kube-api-access-2mxhg\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803786 kubelet[2933]: I0714 22:29:58.803080 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34d3fac0-17ac-4448-9296-a6a13fc77c79-tigera-ca-bundle\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803786 kubelet[2933]: I0714 22:29:58.803102 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3648e792-10bd-47d5-b1a3-8721cc9e0b3a-tigera-ca-bundle\") pod \"calico-typha-5cbbf86854-xbrct\" (UID: \"3648e792-10bd-47d5-b1a3-8721cc9e0b3a\") " pod="calico-system/calico-typha-5cbbf86854-xbrct" Jul 14 22:29:58.803786 kubelet[2933]: I0714 22:29:58.803122 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/34d3fac0-17ac-4448-9296-a6a13fc77c79-node-certs\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803786 kubelet[2933]: I0714 22:29:58.803139 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-cni-log-dir\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803944 kubelet[2933]: I0714 22:29:58.803156 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-var-run-calico\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803944 kubelet[2933]: I0714 22:29:58.803182 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-cni-net-dir\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803944 kubelet[2933]: I0714 22:29:58.803201 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-flexvol-driver-host\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.803944 kubelet[2933]: I0714 22:29:58.803222 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rhch\" (UniqueName: \"kubernetes.io/projected/3648e792-10bd-47d5-b1a3-8721cc9e0b3a-kube-api-access-9rhch\") pod \"calico-typha-5cbbf86854-xbrct\" (UID: \"3648e792-10bd-47d5-b1a3-8721cc9e0b3a\") " pod="calico-system/calico-typha-5cbbf86854-xbrct" Jul 14 22:29:58.803944 kubelet[2933]: I0714 22:29:58.803253 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34d3fac0-17ac-4448-9296-a6a13fc77c79-lib-modules\") pod \"calico-node-x8b2g\" (UID: \"34d3fac0-17ac-4448-9296-a6a13fc77c79\") " pod="calico-system/calico-node-x8b2g" Jul 14 22:29:58.898522 kubelet[2933]: E0714 22:29:58.898442 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:29:58.903886 kubelet[2933]: I0714 22:29:58.903824 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/9a5e1c4b-6531-4d21-a204-77a82ca32ab1-kubelet-dir\") pod \"csi-node-driver-kbjjj\" (UID: \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\") " pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:29:58.903886 kubelet[2933]: I0714 22:29:58.903870 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9a5e1c4b-6531-4d21-a204-77a82ca32ab1-registration-dir\") pod \"csi-node-driver-kbjjj\" (UID: \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\") " pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:29:58.904069 kubelet[2933]: I0714 22:29:58.903924 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9a5e1c4b-6531-4d21-a204-77a82ca32ab1-socket-dir\") pod \"csi-node-driver-kbjjj\" (UID: \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\") " pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:29:58.904069 kubelet[2933]: I0714 22:29:58.903945 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjhxz\" (UniqueName: \"kubernetes.io/projected/9a5e1c4b-6531-4d21-a204-77a82ca32ab1-kube-api-access-vjhxz\") pod \"csi-node-driver-kbjjj\" (UID: \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\") " pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:29:58.904069 kubelet[2933]: I0714 22:29:58.903997 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9a5e1c4b-6531-4d21-a204-77a82ca32ab1-varrun\") pod \"csi-node-driver-kbjjj\" (UID: \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\") " pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:29:58.906659 kubelet[2933]: E0714 22:29:58.905820 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.906659 kubelet[2933]: W0714 22:29:58.905843 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.906659 kubelet[2933]: E0714 22:29:58.905870 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.906659 kubelet[2933]: E0714 22:29:58.906196 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.906659 kubelet[2933]: W0714 22:29:58.906207 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.906659 kubelet[2933]: E0714 22:29:58.906241 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:29:58.906659 kubelet[2933]: E0714 22:29:58.906585 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.906659 kubelet[2933]: W0714 22:29:58.906598 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.906659 kubelet[2933]: E0714 22:29:58.906625 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.906983 kubelet[2933]: E0714 22:29:58.906923 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.906983 kubelet[2933]: W0714 22:29:58.906935 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.907034 kubelet[2933]: E0714 22:29:58.906999 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.908473 kubelet[2933]: E0714 22:29:58.907655 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.908473 kubelet[2933]: W0714 22:29:58.907698 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.908473 kubelet[2933]: E0714 22:29:58.907758 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.909224 kubelet[2933]: E0714 22:29:58.908948 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.909224 kubelet[2933]: W0714 22:29:58.908978 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.909224 kubelet[2933]: E0714 22:29:58.909145 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.910280 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.911374 kubelet[2933]: W0714 22:29:58.910295 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.910388 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.910523 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.911374 kubelet[2933]: W0714 22:29:58.910531 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.910610 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.910793 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.911374 kubelet[2933]: W0714 22:29:58.910803 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.910886 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.911374 kubelet[2933]: E0714 22:29:58.911085 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.912724 kubelet[2933]: W0714 22:29:58.911100 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.912724 kubelet[2933]: E0714 22:29:58.911193 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.912724 kubelet[2933]: E0714 22:29:58.911374 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.912724 kubelet[2933]: W0714 22:29:58.911383 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.912724 kubelet[2933]: E0714 22:29:58.911469 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:58.912724 kubelet[2933]: E0714 22:29:58.911606 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:58.912724 kubelet[2933]: W0714 22:29:58.911614 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:58.912724 kubelet[2933]: E0714 22:29:58.911625 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:29:59.012798 kubelet[2933]: E0714 22:29:59.012773 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:59.012798 kubelet[2933]: W0714 22:29:59.012792 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:59.012898 kubelet[2933]: E0714 22:29:59.012805 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:59.020744 kubelet[2933]: E0714 22:29:59.020711 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:29:59.020744 kubelet[2933]: W0714 22:29:59.020730 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:29:59.020744 kubelet[2933]: E0714 22:29:59.020751 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:29:59.027845 kubelet[2933]: E0714 22:29:59.027802 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:59.028779 containerd[1575]: time="2025-07-14T22:29:59.028731000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cbbf86854-xbrct,Uid:3648e792-10bd-47d5-b1a3-8721cc9e0b3a,Namespace:calico-system,Attempt:0,}" Jul 14 22:29:59.068971 containerd[1575]: time="2025-07-14T22:29:59.068666323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:59.069671 containerd[1575]: time="2025-07-14T22:29:59.068735613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:59.069671 containerd[1575]: time="2025-07-14T22:29:59.068750170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:59.069671 containerd[1575]: time="2025-07-14T22:29:59.068846512Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:59.096183 containerd[1575]: time="2025-07-14T22:29:59.096130663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8b2g,Uid:34d3fac0-17ac-4448-9296-a6a13fc77c79,Namespace:calico-system,Attempt:0,}" Jul 14 22:29:59.136076 containerd[1575]: time="2025-07-14T22:29:59.135646484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:29:59.136076 containerd[1575]: time="2025-07-14T22:29:59.135715234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:29:59.136076 containerd[1575]: time="2025-07-14T22:29:59.135733298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:59.137000 containerd[1575]: time="2025-07-14T22:29:59.136743673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:29:59.137623 containerd[1575]: time="2025-07-14T22:29:59.137522773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cbbf86854-xbrct,Uid:3648e792-10bd-47d5-b1a3-8721cc9e0b3a,Namespace:calico-system,Attempt:0,} returns sandbox id \"092d97676c3dae2f04a8417f24d2820d4fb5562e3b9936e25d626a9f71a57451\"" Jul 14 22:29:59.143967 kubelet[2933]: E0714 22:29:59.143942 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:29:59.154305 containerd[1575]: time="2025-07-14T22:29:59.154052877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 14 22:29:59.180635 containerd[1575]: time="2025-07-14T22:29:59.180581203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x8b2g,Uid:34d3fac0-17ac-4448-9296-a6a13fc77c79,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\"" Jul 14 22:30:00.265756 kubelet[2933]: E0714 22:30:00.265658 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:01.263166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2758381331.mount: Deactivated successfully. 
Jul 14 22:30:02.265945 kubelet[2933]: E0714 22:30:02.265854 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:03.302998 containerd[1575]: time="2025-07-14T22:30:03.302931057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:03.374846 containerd[1575]: time="2025-07-14T22:30:03.374755755Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 14 22:30:03.445392 containerd[1575]: time="2025-07-14T22:30:03.445286353Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:03.493180 containerd[1575]: time="2025-07-14T22:30:03.493033606Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:03.493989 containerd[1575]: time="2025-07-14T22:30:03.493922261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.339823427s" Jul 14 22:30:03.493989 containerd[1575]: time="2025-07-14T22:30:03.493975982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 14 22:30:03.495783 containerd[1575]: time="2025-07-14T22:30:03.495761769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 14 22:30:03.512691 containerd[1575]: time="2025-07-14T22:30:03.512489139Z" level=info msg="CreateContainer within sandbox \"092d97676c3dae2f04a8417f24d2820d4fb5562e3b9936e25d626a9f71a57451\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 14 22:30:04.265235 kubelet[2933]: E0714 22:30:04.265169 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:06.265508 kubelet[2933]: E0714 22:30:06.265449 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:06.668737 containerd[1575]: time="2025-07-14T22:30:06.668655547Z" level=info msg="CreateContainer within sandbox \"092d97676c3dae2f04a8417f24d2820d4fb5562e3b9936e25d626a9f71a57451\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8aec019bb775c6c1c691b7d3dc96447617e79def36a0f7520a8a75c5f5015bf8\"" Jul 14 
22:30:06.673278 containerd[1575]: time="2025-07-14T22:30:06.673222203Z" level=info msg="StartContainer for \"8aec019bb775c6c1c691b7d3dc96447617e79def36a0f7520a8a75c5f5015bf8\"" Jul 14 22:30:08.050913 containerd[1575]: time="2025-07-14T22:30:08.050833310Z" level=info msg="StartContainer for \"8aec019bb775c6c1c691b7d3dc96447617e79def36a0f7520a8a75c5f5015bf8\" returns successfully" Jul 14 22:30:08.265988 kubelet[2933]: E0714 22:30:08.265924 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:09.055713 kubelet[2933]: E0714 22:30:09.055673 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:09.061688 kubelet[2933]: E0714 22:30:09.061655 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:09.061688 kubelet[2933]: W0714 22:30:09.061676 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:09.061688 kubelet[2933]: E0714 22:30:09.061695 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:09.062103 kubelet[2933]: E0714 22:30:09.062071 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:09.062103 kubelet[2933]: W0714 22:30:09.062087 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:09.062103 kubelet[2933]: E0714 22:30:09.062098 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:09.062358 kubelet[2933]: E0714 22:30:09.062326 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:09.062417 kubelet[2933]: W0714 22:30:09.062363 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:09.062417 kubelet[2933]: E0714 22:30:09.062373 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.057899 kubelet[2933]: E0714 22:30:10.057813 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:10.072005 kubelet[2933]: E0714 22:30:10.071963 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.072005 kubelet[2933]: W0714 22:30:10.071994 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.072204 kubelet[2933]: E0714 22:30:10.072033 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.072360 kubelet[2933]: E0714 22:30:10.072322 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.072436 kubelet[2933]: W0714 22:30:10.072338 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.072436 kubelet[2933]: E0714 22:30:10.072378 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.073420 kubelet[2933]: E0714 22:30:10.073384 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.073420 kubelet[2933]: W0714 22:30:10.073401 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.073420 kubelet[2933]: E0714 22:30:10.073416 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.073670 kubelet[2933]: E0714 22:30:10.073648 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.073670 kubelet[2933]: W0714 22:30:10.073664 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.073728 kubelet[2933]: E0714 22:30:10.073680 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.073948 kubelet[2933]: E0714 22:30:10.073914 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.073948 kubelet[2933]: W0714 22:30:10.073928 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.073948 kubelet[2933]: E0714 22:30:10.073939 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.074182 kubelet[2933]: E0714 22:30:10.074163 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.074182 kubelet[2933]: W0714 22:30:10.074175 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.074243 kubelet[2933]: E0714 22:30:10.074185 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.074486 kubelet[2933]: E0714 22:30:10.074447 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.074486 kubelet[2933]: W0714 22:30:10.074479 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.074602 kubelet[2933]: E0714 22:30:10.074512 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.074795 kubelet[2933]: E0714 22:30:10.074759 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.074795 kubelet[2933]: W0714 22:30:10.074773 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.074795 kubelet[2933]: E0714 22:30:10.074793 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.075101 kubelet[2933]: E0714 22:30:10.075078 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.075101 kubelet[2933]: W0714 22:30:10.075093 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.075164 kubelet[2933]: E0714 22:30:10.075104 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.075398 kubelet[2933]: E0714 22:30:10.075337 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.075398 kubelet[2933]: W0714 22:30:10.075367 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.075398 kubelet[2933]: E0714 22:30:10.075375 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.075620 kubelet[2933]: E0714 22:30:10.075605 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.075620 kubelet[2933]: W0714 22:30:10.075616 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.075684 kubelet[2933]: E0714 22:30:10.075625 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.075825 kubelet[2933]: E0714 22:30:10.075810 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.075825 kubelet[2933]: W0714 22:30:10.075820 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.075825 kubelet[2933]: E0714 22:30:10.075829 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.076076 kubelet[2933]: E0714 22:30:10.076060 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.076076 kubelet[2933]: W0714 22:30:10.076072 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.076122 kubelet[2933]: E0714 22:30:10.076082 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.076275 kubelet[2933]: E0714 22:30:10.076261 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.076275 kubelet[2933]: W0714 22:30:10.076271 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.076334 kubelet[2933]: E0714 22:30:10.076280 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.076470 kubelet[2933]: E0714 22:30:10.076456 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.076470 kubelet[2933]: W0714 22:30:10.076466 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.076520 kubelet[2933]: E0714 22:30:10.076477 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.081813 kubelet[2933]: E0714 22:30:10.081774 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.081813 kubelet[2933]: W0714 22:30:10.081793 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.081813 kubelet[2933]: E0714 22:30:10.081807 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.082145 kubelet[2933]: E0714 22:30:10.082115 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.082145 kubelet[2933]: W0714 22:30:10.082127 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.082145 kubelet[2933]: E0714 22:30:10.082141 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.082434 kubelet[2933]: E0714 22:30:10.082413 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.082434 kubelet[2933]: W0714 22:30:10.082425 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.082499 kubelet[2933]: E0714 22:30:10.082438 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.082674 kubelet[2933]: E0714 22:30:10.082649 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.082674 kubelet[2933]: W0714 22:30:10.082668 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.082723 kubelet[2933]: E0714 22:30:10.082684 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.082910 kubelet[2933]: E0714 22:30:10.082893 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.082910 kubelet[2933]: W0714 22:30:10.082905 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.082975 kubelet[2933]: E0714 22:30:10.082919 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.083143 kubelet[2933]: E0714 22:30:10.083129 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.083143 kubelet[2933]: W0714 22:30:10.083140 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.083196 kubelet[2933]: E0714 22:30:10.083166 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.083411 kubelet[2933]: E0714 22:30:10.083397 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.083411 kubelet[2933]: W0714 22:30:10.083408 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.083479 kubelet[2933]: E0714 22:30:10.083422 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.083701 kubelet[2933]: E0714 22:30:10.083682 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.083742 kubelet[2933]: W0714 22:30:10.083701 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.083742 kubelet[2933]: E0714 22:30:10.083722 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.083965 kubelet[2933]: E0714 22:30:10.083949 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.083965 kubelet[2933]: W0714 22:30:10.083962 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.084039 kubelet[2933]: E0714 22:30:10.083994 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.084196 kubelet[2933]: E0714 22:30:10.084183 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.084224 kubelet[2933]: W0714 22:30:10.084195 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.084252 kubelet[2933]: E0714 22:30:10.084221 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.084419 kubelet[2933]: E0714 22:30:10.084406 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.084462 kubelet[2933]: W0714 22:30:10.084418 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.084462 kubelet[2933]: E0714 22:30:10.084434 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.084703 kubelet[2933]: E0714 22:30:10.084684 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.084703 kubelet[2933]: W0714 22:30:10.084698 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.084751 kubelet[2933]: E0714 22:30:10.084714 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.084966 kubelet[2933]: E0714 22:30:10.084946 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.084966 kubelet[2933]: W0714 22:30:10.084958 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.085031 kubelet[2933]: E0714 22:30:10.084974 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.085240 kubelet[2933]: E0714 22:30:10.085223 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.085240 kubelet[2933]: W0714 22:30:10.085236 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.085286 kubelet[2933]: E0714 22:30:10.085252 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 14 22:30:10.085445 kubelet[2933]: E0714 22:30:10.085431 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.085445 kubelet[2933]: W0714 22:30:10.085441 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.085536 kubelet[2933]: E0714 22:30:10.085456 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.085692 kubelet[2933]: E0714 22:30:10.085674 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.085692 kubelet[2933]: W0714 22:30:10.085687 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.085744 kubelet[2933]: E0714 22:30:10.085704 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.085960 kubelet[2933]: E0714 22:30:10.085946 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.085996 kubelet[2933]: W0714 22:30:10.085959 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.085996 kubelet[2933]: E0714 22:30:10.085972 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 14 22:30:10.086520 kubelet[2933]: E0714 22:30:10.086506 2933 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 14 22:30:10.086520 kubelet[2933]: W0714 22:30:10.086518 2933 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 14 22:30:10.086575 kubelet[2933]: E0714 22:30:10.086530 2933 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
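The three-message pattern above is kubelet's FlexVolume plugin prober, and it repeats many times per second between 22:30:09 and 22:30:11: kubelet sees the plugin directory nodeagent~uds, execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, finds no executable there yet, gets empty stdout, and then fails to decode "" as JSON ("unexpected end of JSON input"). The timing is consistent with the binary not having been installed yet: the flexvol-driver container that evidently populates this directory is only pulled and started at 22:30:12 further down. For reference, a minimal sketch of the init reply a conforming FlexVolume driver prints, assuming only the documented FlexVolume calling convention; the driverStatus shape below is illustrative, not kubelet's exact struct:

package main

// Sketch of a FlexVolume driver entry point. kubelet execs the driver
// binary with a subcommand ("init" here) and parses its stdout as JSON;
// an empty stdout is exactly what produces the "Failed to unmarshal
// output" errors in the log above.

import (
	"encoding/json"
	"os"
)

// driverStatus carries the fields kubelet looks for in a driver reply.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	out := json.NewEncoder(os.Stdout)
	if len(os.Args) < 2 {
		out.Encode(driverStatus{Status: "Failure", Message: "no command given"})
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and advertise no attach/detach support, so
		// kubelet routes only mount/unmount calls to this driver.
		out.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
	default:
		out.Encode(driverStatus{Status: "Not supported"})
	}
}

Installing such a binary at the probed path (or removing a stale nodeagent~uds directory) is what ends this particular error loop.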
Jul 14 22:30:10.132856 kubelet[2933]: I0714 22:30:10.131798 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cbbf86854-xbrct" podStartSLOduration=7.7816167830000005 podStartE2EDuration="12.13177205s" podCreationTimestamp="2025-07-14 22:29:58 +0000 UTC" firstStartedPulling="2025-07-14 22:29:59.144937501 +0000 UTC m=+46.028185907" lastFinishedPulling="2025-07-14 22:30:03.495092768 +0000 UTC m=+50.378341174" observedRunningTime="2025-07-14 22:30:09.318505183 +0000 UTC m=+56.201753589" watchObservedRunningTime="2025-07-14 22:30:10.13177205 +0000 UTC m=+57.015020456"
Jul 14 22:30:10.265478 kubelet[2933]: E0714 22:30:10.265388 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:11.092048 kubelet[2933]: E0714 22:30:11.091987 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:30:11.876173 containerd[1575]: time="2025-07-14T22:30:11.876059432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:30:11.910955 containerd[1575]: time="2025-07-14T22:30:11.910855149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 14 22:30:11.950466 containerd[1575]: time="2025-07-14T22:30:11.950387109Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:30:12.005954 containerd[1575]: time="2025-07-14T22:30:12.005889684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:30:12.006943 containerd[1575]: time="2025-07-14T22:30:12.006876724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 8.510995531s"
Jul 14 22:30:12.006943 containerd[1575]: time="2025-07-14T22:30:12.006930725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 14 22:30:12.009634 containerd[1575]: time="2025-07-14T22:30:12.009582893Z" level=info msg="CreateContainer within sandbox \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 14 22:30:12.265572 kubelet[2933]: E0714 22:30:12.265375 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:12.640408 containerd[1575]: time="2025-07-14T22:30:12.640315441Z" level=info msg="CreateContainer within sandbox \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f\""
Jul 14 22:30:12.642790 containerd[1575]: time="2025-07-14T22:30:12.640948474Z" level=info msg="StartContainer for \"f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f\""
Jul 14 22:30:13.329934 containerd[1575]: time="2025-07-14T22:30:13.329852329Z" level=info msg="StartContainer for \"f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f\" returns successfully"
Jul 14 22:30:13.351067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f-rootfs.mount: Deactivated successfully.
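The flexvol-driver container that just ran is a run-once installer, so the shim-teardown messages that follow are expected; it is also what populates the FlexVolume exec directory probed earlier, and indeed no further nodeagent~uds errors appear after this point. A quick node-side check that the binary landed and is executable, a throwaway sketch using the path taken from the error messages above:

package main

// Verify the condition behind the earlier FlexVolume failures: does the
// probed driver binary exist, and does it carry an execute bit?

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver missing:", err) // the state the 22:30:09-22:30:11 errors capture
		return
	}
	if info.Mode()&0o111 == 0 {
		fmt.Println("driver present but not executable:", info.Mode())
		return
	}
	fmt.Println("driver looks runnable:", info.Mode())
}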
Jul 14 22:30:13.519492 containerd[1575]: time="2025-07-14T22:30:13.519443634Z" level=error msg="collecting metrics for f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f" error="cgroups: cgroup deleted: unknown"
Jul 14 22:30:14.067607 containerd[1575]: time="2025-07-14T22:30:14.067485431Z" level=info msg="shim disconnected" id=f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f namespace=k8s.io
Jul 14 22:30:14.067607 containerd[1575]: time="2025-07-14T22:30:14.067577886Z" level=warning msg="cleaning up after shim disconnected" id=f42c7c824f4ebeb72f0f3dee31a70e0319c9a78e2d4a4a3491661728ec06db6f namespace=k8s.io
Jul 14 22:30:14.067607 containerd[1575]: time="2025-07-14T22:30:14.067593064Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:30:14.082057 containerd[1575]: time="2025-07-14T22:30:14.081989259Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:30:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jul 14 22:30:14.265738 kubelet[2933]: E0714 22:30:14.265685 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:14.877176 containerd[1575]: time="2025-07-14T22:30:14.877119837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 14 22:30:16.265939 kubelet[2933]: E0714 22:30:16.265856 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:18.265328 kubelet[2933]: E0714 22:30:18.265203 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:20.266110 kubelet[2933]: E0714 22:30:20.266021 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:22.265482 kubelet[2933]: E0714 22:30:22.265403 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
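kubelet re-emits the NetworkReady=false error above roughly every two seconds per pod sync, and will keep doing so until the install-cni container (pulled just below) writes a network configuration for the node. A throwaway readiness check for that condition, assuming the conventional /etc/cni/net.d config directory, which is an assumption, not something this log states:

package main

// Not kubelet code: poll for the condition the pod_workers errors are
// waiting on, i.e. a CNI network config (.conf or .conflist) appearing.

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	const confDir = "/etc/cni/net.d" // assumption: default CNI conf dir
	for {
		matches, _ := filepath.Glob(filepath.Join(confDir, "*.conf*"))
		if len(matches) > 0 {
			fmt.Println("CNI config present:", matches)
			return
		}
		fmt.Fprintln(os.Stderr, "no CNI config yet; retrying")
		time.Sleep(2 * time.Second)
	}
}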
Jul 14 22:30:23.499300 containerd[1575]: time="2025-07-14T22:30:23.499231043Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:30:23.555775 containerd[1575]: time="2025-07-14T22:30:23.555675883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 14 22:30:23.718624 containerd[1575]: time="2025-07-14T22:30:23.718578337Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:30:23.914282 containerd[1575]: time="2025-07-14T22:30:23.914200412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:30:23.914905 containerd[1575]: time="2025-07-14T22:30:23.914859193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 9.037701504s"
Jul 14 22:30:23.914905 containerd[1575]: time="2025-07-14T22:30:23.914899148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 14 22:30:24.048168 containerd[1575]: time="2025-07-14T22:30:24.048082421Z" level=info msg="CreateContainer within sandbox \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 14 22:30:24.265954 kubelet[2933]: E0714 22:30:24.265794 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:25.363676 containerd[1575]: time="2025-07-14T22:30:25.363616048Z" level=info msg="CreateContainer within sandbox \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"dd53d5f821d6e47e95dace4c0c24150671b7b9ca6652eeca855760f09bb3a542\""
Jul 14 22:30:25.364472 containerd[1575]: time="2025-07-14T22:30:25.364440422Z" level=info msg="StartContainer for \"dd53d5f821d6e47e95dace4c0c24150671b7b9ca6652eeca855760f09bb3a542\""
Jul 14 22:30:26.265370 kubelet[2933]: E0714 22:30:26.265272 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:26.430330 containerd[1575]: time="2025-07-14T22:30:26.430263964Z" level=info msg="StartContainer for \"dd53d5f821d6e47e95dace4c0c24150671b7b9ca6652eeca855760f09bb3a542\" returns successfully"
Jul 14 22:30:28.273227 kubelet[2933]: E0714 22:30:28.273166 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1"
Jul 14 22:30:28.739324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd53d5f821d6e47e95dace4c0c24150671b7b9ca6652eeca855760f09bb3a542-rootfs.mount: Deactivated successfully.
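The two completed pulls give enough data for a rough throughput comparison. The "size" fields are unpacked content sizes, so the "bytes read" figures (actual registry transfer) are the better numerators; a quick calculation from the logged values:

package main

// Back-of-the-envelope pull throughput from the containerd messages above,
// pairing each "stop pulling ... bytes read" figure with the duration
// reported by the matching "Pulled image ... in ...s" line.

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" from the stop-pulling message
		seconds float64 // duration from the "Pulled image" message
	}{
		{"pod2daemon-flexvol:v3.30.2", 4446956, 8.510995531},
		{"cni:v3.30.2", 70436221, 9.037701504},
	}
	for _, p := range pulls {
		fmt.Printf("%-28s %.1f MB/s\n", p.image, p.bytes/p.seconds/1e6)
	}
	// Roughly 0.5 MB/s for the small flexvol image versus 7.8 MB/s for the
	// CNI image: the small pull is dominated by per-request latency, not
	// bandwidth.
}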
Jul 14 22:30:28.750979 containerd[1575]: time="2025-07-14T22:30:28.750903924Z" level=info msg="shim disconnected" id=dd53d5f821d6e47e95dace4c0c24150671b7b9ca6652eeca855760f09bb3a542 namespace=k8s.io
Jul 14 22:30:28.751730 containerd[1575]: time="2025-07-14T22:30:28.751554769Z" level=warning msg="cleaning up after shim disconnected" id=dd53d5f821d6e47e95dace4c0c24150671b7b9ca6652eeca855760f09bb3a542 namespace=k8s.io
Jul 14 22:30:28.751730 containerd[1575]: time="2025-07-14T22:30:28.751576420Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 14 22:30:28.809202 kubelet[2933]: I0714 22:30:28.809145 2933 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 14 22:30:28.909807 kubelet[2933]: I0714 22:30:28.909391 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4n9s\" (UniqueName: \"kubernetes.io/projected/e3c48e21-ba3c-4349-bbfa-eab840c18864-kube-api-access-l4n9s\") pod \"calico-kube-controllers-64d74cf67c-tmxjm\" (UID: \"e3c48e21-ba3c-4349-bbfa-eab840c18864\") " pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm"
Jul 14 22:30:28.909807 kubelet[2933]: I0714 22:30:28.909461 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/53c3c39f-fb7c-417c-96f3-a751d1e4f134-config-volume\") pod \"coredns-7c65d6cfc9-4kpsm\" (UID: \"53c3c39f-fb7c-417c-96f3-a751d1e4f134\") " pod="kube-system/coredns-7c65d6cfc9-4kpsm"
Jul 14 22:30:28.909807 kubelet[2933]: I0714 22:30:28.909488 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92eef89a-ebb2-46c1-949c-ac95e4738764-config-volume\") pod \"coredns-7c65d6cfc9-frkqm\" (UID: \"92eef89a-ebb2-46c1-949c-ac95e4738764\") " pod="kube-system/coredns-7c65d6cfc9-frkqm"
Jul 14 22:30:28.909807 kubelet[2933]: I0714 22:30:28.909510 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nptdk\" (UniqueName: \"kubernetes.io/projected/92eef89a-ebb2-46c1-949c-ac95e4738764-kube-api-access-nptdk\") pod \"coredns-7c65d6cfc9-frkqm\" (UID: \"92eef89a-ebb2-46c1-949c-ac95e4738764\") " pod="kube-system/coredns-7c65d6cfc9-frkqm"
Jul 14 22:30:28.909807 kubelet[2933]: I0714 22:30:28.909547 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/353e5854-d86d-4ff9-b078-7e4fa34f4ed2-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-5b429\" (UID: \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\") " pod="calico-system/goldmane-58fd7646b9-5b429"
Jul 14 22:30:28.910134 kubelet[2933]: I0714 22:30:28.909610 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-ca-bundle\") pod \"whisker-944fbfff-rkn5j\" (UID: \"0df97cf3-d658-4a6f-aa84-b00ae717886f\") " pod="calico-system/whisker-944fbfff-rkn5j"
Jul 14 22:30:28.910134 kubelet[2933]: I0714 22:30:28.909645 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c47bs\" (UniqueName: \"kubernetes.io/projected/53c3c39f-fb7c-417c-96f3-a751d1e4f134-kube-api-access-c47bs\") pod \"coredns-7c65d6cfc9-4kpsm\" (UID: \"53c3c39f-fb7c-417c-96f3-a751d1e4f134\") " pod="kube-system/coredns-7c65d6cfc9-4kpsm"
pod="kube-system/coredns-7c65d6cfc9-4kpsm" Jul 14 22:30:28.910134 kubelet[2933]: I0714 22:30:28.909671 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/353e5854-d86d-4ff9-b078-7e4fa34f4ed2-goldmane-key-pair\") pod \"goldmane-58fd7646b9-5b429\" (UID: \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\") " pod="calico-system/goldmane-58fd7646b9-5b429" Jul 14 22:30:28.910134 kubelet[2933]: I0714 22:30:28.909694 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gnmv\" (UniqueName: \"kubernetes.io/projected/0df97cf3-d658-4a6f-aa84-b00ae717886f-kube-api-access-9gnmv\") pod \"whisker-944fbfff-rkn5j\" (UID: \"0df97cf3-d658-4a6f-aa84-b00ae717886f\") " pod="calico-system/whisker-944fbfff-rkn5j" Jul 14 22:30:28.910134 kubelet[2933]: I0714 22:30:28.909719 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74d2d033-87d9-4d3f-b1f8-1b18151c4e93-calico-apiserver-certs\") pod \"calico-apiserver-7556875495-z7qk6\" (UID: \"74d2d033-87d9-4d3f-b1f8-1b18151c4e93\") " pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" Jul 14 22:30:28.910314 kubelet[2933]: I0714 22:30:28.909743 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f2efa654-2e4a-4eb0-bd1d-971920483d9d-calico-apiserver-certs\") pod \"calico-apiserver-7556875495-4j7xf\" (UID: \"f2efa654-2e4a-4eb0-bd1d-971920483d9d\") " pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" Jul 14 22:30:28.910314 kubelet[2933]: I0714 22:30:28.909861 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vksf8\" (UniqueName: \"kubernetes.io/projected/f2efa654-2e4a-4eb0-bd1d-971920483d9d-kube-api-access-vksf8\") pod \"calico-apiserver-7556875495-4j7xf\" (UID: \"f2efa654-2e4a-4eb0-bd1d-971920483d9d\") " pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" Jul 14 22:30:28.910314 kubelet[2933]: I0714 22:30:28.909986 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-backend-key-pair\") pod \"whisker-944fbfff-rkn5j\" (UID: \"0df97cf3-d658-4a6f-aa84-b00ae717886f\") " pod="calico-system/whisker-944fbfff-rkn5j" Jul 14 22:30:28.910314 kubelet[2933]: I0714 22:30:28.910037 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/353e5854-d86d-4ff9-b078-7e4fa34f4ed2-config\") pod \"goldmane-58fd7646b9-5b429\" (UID: \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\") " pod="calico-system/goldmane-58fd7646b9-5b429" Jul 14 22:30:28.910314 kubelet[2933]: I0714 22:30:28.910064 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brgdf\" (UniqueName: \"kubernetes.io/projected/353e5854-d86d-4ff9-b078-7e4fa34f4ed2-kube-api-access-brgdf\") pod \"goldmane-58fd7646b9-5b429\" (UID: \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\") " pod="calico-system/goldmane-58fd7646b9-5b429" Jul 14 22:30:28.910510 kubelet[2933]: I0714 22:30:28.910130 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rsgxg\" (UniqueName: \"kubernetes.io/projected/74d2d033-87d9-4d3f-b1f8-1b18151c4e93-kube-api-access-rsgxg\") pod \"calico-apiserver-7556875495-z7qk6\" (UID: \"74d2d033-87d9-4d3f-b1f8-1b18151c4e93\") " pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" Jul 14 22:30:28.910510 kubelet[2933]: I0714 22:30:28.910206 2933 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3c48e21-ba3c-4349-bbfa-eab840c18864-tigera-ca-bundle\") pod \"calico-kube-controllers-64d74cf67c-tmxjm\" (UID: \"e3c48e21-ba3c-4349-bbfa-eab840c18864\") " pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" Jul 14 22:30:29.164011 kubelet[2933]: E0714 22:30:29.163943 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:29.164178 containerd[1575]: time="2025-07-14T22:30:29.164047074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5b429,Uid:353e5854-d86d-4ff9-b078-7e4fa34f4ed2,Namespace:calico-system,Attempt:0,}" Jul 14 22:30:29.164360 containerd[1575]: time="2025-07-14T22:30:29.164306312Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kpsm,Uid:53c3c39f-fb7c-417c-96f3-a751d1e4f134,Namespace:kube-system,Attempt:0,}" Jul 14 22:30:29.173298 containerd[1575]: time="2025-07-14T22:30:29.173244666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-4j7xf,Uid:f2efa654-2e4a-4eb0-bd1d-971920483d9d,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:30:29.178171 containerd[1575]: time="2025-07-14T22:30:29.178127572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-z7qk6,Uid:74d2d033-87d9-4d3f-b1f8-1b18151c4e93,Namespace:calico-apiserver,Attempt:0,}" Jul 14 22:30:29.183674 kubelet[2933]: E0714 22:30:29.183611 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:29.184415 containerd[1575]: time="2025-07-14T22:30:29.184122663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frkqm,Uid:92eef89a-ebb2-46c1-949c-ac95e4738764,Namespace:kube-system,Attempt:0,}" Jul 14 22:30:29.185939 containerd[1575]: time="2025-07-14T22:30:29.185891815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d74cf67c-tmxjm,Uid:e3c48e21-ba3c-4349-bbfa-eab840c18864,Namespace:calico-system,Attempt:0,}" Jul 14 22:30:29.188612 containerd[1575]: time="2025-07-14T22:30:29.188564028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-944fbfff-rkn5j,Uid:0df97cf3-d658-4a6f-aa84-b00ae717886f,Namespace:calico-system,Attempt:0,}" Jul 14 22:30:29.277657 containerd[1575]: time="2025-07-14T22:30:29.277504663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 14 22:30:30.269486 containerd[1575]: time="2025-07-14T22:30:30.269244755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbjjj,Uid:9a5e1c4b-6531-4d21-a204-77a82ca32ab1,Namespace:calico-system,Attempt:0,}" Jul 14 22:30:30.686503 containerd[1575]: time="2025-07-14T22:30:30.686404402Z" level=error msg="Failed to destroy network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.686736 containerd[1575]: time="2025-07-14T22:30:30.686614658Z" level=error msg="Failed to destroy network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.689481 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f-shm.mount: Deactivated successfully. Jul 14 22:30:30.689757 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4-shm.mount: Deactivated successfully. Jul 14 22:30:30.694759 containerd[1575]: time="2025-07-14T22:30:30.694700145Z" level=error msg="encountered an error cleaning up failed sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.694817 containerd[1575]: time="2025-07-14T22:30:30.694795034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5b429,Uid:353e5854-d86d-4ff9-b078-7e4fa34f4ed2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.694893 containerd[1575]: time="2025-07-14T22:30:30.694701608Z" level=error msg="encountered an error cleaning up failed sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.694933 containerd[1575]: time="2025-07-14T22:30:30.694885595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kpsm,Uid:53c3c39f-fb7c-417c-96f3-a751d1e4f134,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.695142 kubelet[2933]: E0714 22:30:30.695082 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.695667 kubelet[2933]: E0714 22:30:30.695121 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:30.695667 kubelet[2933]: E0714 22:30:30.695207 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4kpsm" Jul 14 22:30:30.695667 kubelet[2933]: E0714 22:30:30.695215 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-5b429" Jul 14 22:30:30.695667 kubelet[2933]: E0714 22:30:30.695242 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-5b429" Jul 14 22:30:30.695815 kubelet[2933]: E0714 22:30:30.695244 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4kpsm" Jul 14 22:30:30.695815 kubelet[2933]: E0714 22:30:30.695305 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-5b429_calico-system(353e5854-d86d-4ff9-b078-7e4fa34f4ed2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-5b429_calico-system(353e5854-d86d-4ff9-b078-7e4fa34f4ed2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5b429" podUID="353e5854-d86d-4ff9-b078-7e4fa34f4ed2" Jul 14 22:30:30.695815 kubelet[2933]: E0714 22:30:30.695307 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4kpsm_kube-system(53c3c39f-fb7c-417c-96f3-a751d1e4f134)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4kpsm_kube-system(53c3c39f-fb7c-417c-96f3-a751d1e4f134)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4kpsm" podUID="53c3c39f-fb7c-417c-96f3-a751d1e4f134" Jul 14 22:30:31.281142 kubelet[2933]: I0714 22:30:31.281088 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:31.282262 containerd[1575]: time="2025-07-14T22:30:31.282191597Z" level=info msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" Jul 14 22:30:31.283732 kubelet[2933]: I0714 22:30:31.283604 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:30:31.283873 containerd[1575]: time="2025-07-14T22:30:31.283822469Z" level=info msg="Ensure that sandbox ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4 in task-service has been cleanup successfully" Jul 14 22:30:31.284609 containerd[1575]: time="2025-07-14T22:30:31.284542234Z" level=info msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" Jul 14 22:30:31.284796 containerd[1575]: time="2025-07-14T22:30:31.284749825Z" level=info msg="Ensure that sandbox 73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f in task-service has been cleanup successfully" Jul 14 22:30:31.314795 containerd[1575]: time="2025-07-14T22:30:31.314722392Z" level=error msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" failed" error="failed to destroy network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.315158 kubelet[2933]: E0714 22:30:31.315100 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:31.315258 kubelet[2933]: E0714 22:30:31.315188 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4"} Jul 14 22:30:31.315299 kubelet[2933]: E0714 22:30:31.315281 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53c3c39f-fb7c-417c-96f3-a751d1e4f134\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:31.315402 kubelet[2933]: E0714 22:30:31.315314 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53c3c39f-fb7c-417c-96f3-a751d1e4f134\" with KillPodSandboxError: \"rpc 
error: code = Unknown desc = failed to destroy network for sandbox \\\"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4kpsm" podUID="53c3c39f-fb7c-417c-96f3-a751d1e4f134" Jul 14 22:30:31.315932 containerd[1575]: time="2025-07-14T22:30:31.315854723Z" level=error msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" failed" error="failed to destroy network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.316238 kubelet[2933]: E0714 22:30:31.316182 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:30:31.316238 kubelet[2933]: E0714 22:30:31.316229 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f"} Jul 14 22:30:31.316337 kubelet[2933]: E0714 22:30:31.316261 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:31.316337 kubelet[2933]: E0714 22:30:31.316289 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5b429" podUID="353e5854-d86d-4ff9-b078-7e4fa34f4ed2" Jul 14 22:30:31.810713 containerd[1575]: time="2025-07-14T22:30:31.810554923Z" level=error msg="Failed to destroy network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.811014 containerd[1575]: time="2025-07-14T22:30:31.810981437Z" level=error msg="encountered an error cleaning up failed sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.811050 containerd[1575]: time="2025-07-14T22:30:31.811033775Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-4j7xf,Uid:f2efa654-2e4a-4eb0-bd1d-971920483d9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.811359 kubelet[2933]: E0714 22:30:31.811274 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.811888 kubelet[2933]: E0714 22:30:31.811374 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" Jul 14 22:30:31.811888 kubelet[2933]: E0714 22:30:31.811396 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" Jul 14 22:30:31.811888 kubelet[2933]: E0714 22:30:31.811447 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7556875495-4j7xf_calico-apiserver(f2efa654-2e4a-4eb0-bd1d-971920483d9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7556875495-4j7xf_calico-apiserver(f2efa654-2e4a-4eb0-bd1d-971920483d9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" podUID="f2efa654-2e4a-4eb0-bd1d-971920483d9d" Jul 14 22:30:31.819850 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95-shm.mount: Deactivated successfully. 
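Every failure in this stretch of the log has the same proximate cause: the Calico CNI plugin stats /var/lib/calico/nodename before calico/node (whose image pull was requested at 22:30:29 above) has started and written it. The error text itself says what to verify; the following is a minimal Python sketch of that verification, where only the path comes from the log and everything else (names, exit codes) is illustrative.

```python
#!/usr/bin/env python3
# Minimal sketch of the check the error text suggests. Assumptions: run
# directly on the affected node; only /var/lib/calico/nodename is taken
# from the log, the rest is illustrative.
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"

def check_nodename(path: str = NODENAME_FILE) -> int:
    """Return 0 if calico/node has written its nodename file, 1 otherwise."""
    if not os.path.exists(path):
        # This is exactly the state the CNI plugin's stat() is reporting:
        # calico/node is not running yet, or /var/lib/calico is not mounted.
        print(f"{path} missing: calico/node not started or volume not mounted")
        return 1
    name = open(path).read().strip()
    if not name:
        print(f"{path} exists but is empty")
        return 1
    print(f"calico/node recorded node name: {name}")
    return 0

if __name__ == "__main__":
    sys.exit(check_nodename())
```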
Jul 14 22:30:31.915889 containerd[1575]: time="2025-07-14T22:30:31.915821588Z" level=error msg="Failed to destroy network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.916489 containerd[1575]: time="2025-07-14T22:30:31.916448910Z" level=error msg="encountered an error cleaning up failed sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.916539 containerd[1575]: time="2025-07-14T22:30:31.916513110Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-z7qk6,Uid:74d2d033-87d9-4d3f-b1f8-1b18151c4e93,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.916872 kubelet[2933]: E0714 22:30:31.916807 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:31.917115 kubelet[2933]: E0714 22:30:31.916893 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" Jul 14 22:30:31.917115 kubelet[2933]: E0714 22:30:31.916919 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" Jul 14 22:30:31.917115 kubelet[2933]: E0714 22:30:31.916978 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7556875495-z7qk6_calico-apiserver(74d2d033-87d9-4d3f-b1f8-1b18151c4e93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7556875495-z7qk6_calico-apiserver(74d2d033-87d9-4d3f-b1f8-1b18151c4e93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" podUID="74d2d033-87d9-4d3f-b1f8-1b18151c4e93" Jul 14 22:30:31.919556 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31-shm.mount: Deactivated successfully. Jul 14 22:30:32.126609 containerd[1575]: time="2025-07-14T22:30:32.126424744Z" level=error msg="Failed to destroy network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.126898 containerd[1575]: time="2025-07-14T22:30:32.126862890Z" level=error msg="encountered an error cleaning up failed sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.126943 containerd[1575]: time="2025-07-14T22:30:32.126910810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frkqm,Uid:92eef89a-ebb2-46c1-949c-ac95e4738764,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.127242 kubelet[2933]: E0714 22:30:32.127175 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.127450 kubelet[2933]: E0714 22:30:32.127265 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-frkqm" Jul 14 22:30:32.127450 kubelet[2933]: E0714 22:30:32.127287 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-frkqm" Jul 14 22:30:32.127450 kubelet[2933]: E0714 22:30:32.127363 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-frkqm_kube-system(92eef89a-ebb2-46c1-949c-ac95e4738764)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-frkqm_kube-system(92eef89a-ebb2-46c1-949c-ac95e4738764)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-frkqm" podUID="92eef89a-ebb2-46c1-949c-ac95e4738764" Jul 14 22:30:32.260593 containerd[1575]: time="2025-07-14T22:30:32.260505489Z" level=error msg="Failed to destroy network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.261503 containerd[1575]: time="2025-07-14T22:30:32.261456279Z" level=error msg="encountered an error cleaning up failed sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.261587 containerd[1575]: time="2025-07-14T22:30:32.261540708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d74cf67c-tmxjm,Uid:e3c48e21-ba3c-4349-bbfa-eab840c18864,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.261922 kubelet[2933]: E0714 22:30:32.261882 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.261991 kubelet[2933]: E0714 22:30:32.261959 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" Jul 14 22:30:32.261991 kubelet[2933]: E0714 22:30:32.261980 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" Jul 14 22:30:32.262061 kubelet[2933]: E0714 22:30:32.262025 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-64d74cf67c-tmxjm_calico-system(e3c48e21-ba3c-4349-bbfa-eab840c18864)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-64d74cf67c-tmxjm_calico-system(e3c48e21-ba3c-4349-bbfa-eab840c18864)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" podUID="e3c48e21-ba3c-4349-bbfa-eab840c18864" Jul 14 22:30:32.288374 kubelet[2933]: I0714 22:30:32.287671 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:30:32.288576 containerd[1575]: time="2025-07-14T22:30:32.288336531Z" level=info msg="StopPodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\"" Jul 14 22:30:32.289038 containerd[1575]: time="2025-07-14T22:30:32.288633631Z" level=info msg="Ensure that sandbox 92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a in task-service has been cleanup successfully" Jul 14 22:30:32.289076 kubelet[2933]: I0714 22:30:32.288822 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:32.289718 containerd[1575]: time="2025-07-14T22:30:32.289636008Z" level=info msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" Jul 14 22:30:32.289892 containerd[1575]: time="2025-07-14T22:30:32.289863276Z" level=info msg="Ensure that sandbox e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04 in task-service has been cleanup successfully" Jul 14 22:30:32.291913 containerd[1575]: time="2025-07-14T22:30:32.291876618Z" level=error msg="Failed to destroy network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.291997 kubelet[2933]: I0714 22:30:32.291942 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:32.292809 containerd[1575]: time="2025-07-14T22:30:32.292779237Z" level=info msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" Jul 14 22:30:32.292945 containerd[1575]: time="2025-07-14T22:30:32.292924281Z" level=info msg="Ensure that sandbox 0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31 in task-service has been cleanup successfully" Jul 14 22:30:32.293267 kubelet[2933]: I0714 22:30:32.293235 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:30:32.294071 containerd[1575]: time="2025-07-14T22:30:32.293713426Z" level=info msg="StopPodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\"" Jul 14 22:30:32.294071 containerd[1575]: time="2025-07-14T22:30:32.293844773Z" level=info msg="Ensure that sandbox 595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95 in task-service has been cleanup successfully" Jul 14 22:30:32.296954 containerd[1575]: time="2025-07-14T22:30:32.294281085Z" level=error msg="encountered an error 
cleaning up failed sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.296954 containerd[1575]: time="2025-07-14T22:30:32.296874680Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-944fbfff-rkn5j,Uid:0df97cf3-d658-4a6f-aa84-b00ae717886f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.297238 kubelet[2933]: E0714 22:30:32.297196 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.297299 kubelet[2933]: E0714 22:30:32.297257 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-944fbfff-rkn5j" Jul 14 22:30:32.297299 kubelet[2933]: E0714 22:30:32.297278 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-944fbfff-rkn5j" Jul 14 22:30:32.297479 kubelet[2933]: E0714 22:30:32.297320 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-944fbfff-rkn5j_calico-system(0df97cf3-d658-4a6f-aa84-b00ae717886f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-944fbfff-rkn5j_calico-system(0df97cf3-d658-4a6f-aa84-b00ae717886f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-944fbfff-rkn5j" podUID="0df97cf3-d658-4a6f-aa84-b00ae717886f" Jul 14 22:30:32.335233 containerd[1575]: time="2025-07-14T22:30:32.334506038Z" level=error msg="StopPodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" failed" error="failed to destroy network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jul 14 22:30:32.335496 kubelet[2933]: E0714 22:30:32.334755 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:30:32.335496 kubelet[2933]: E0714 22:30:32.334811 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95"} Jul 14 22:30:32.335496 kubelet[2933]: E0714 22:30:32.334845 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2efa654-2e4a-4eb0-bd1d-971920483d9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:32.335496 kubelet[2933]: E0714 22:30:32.334868 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2efa654-2e4a-4eb0-bd1d-971920483d9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" podUID="f2efa654-2e4a-4eb0-bd1d-971920483d9d" Jul 14 22:30:32.336375 containerd[1575]: time="2025-07-14T22:30:32.336290709Z" level=error msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" failed" error="failed to destroy network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.336743 kubelet[2933]: E0714 22:30:32.336677 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:32.336795 kubelet[2933]: E0714 22:30:32.336744 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31"} Jul 14 22:30:32.336795 kubelet[2933]: E0714 22:30:32.336784 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74d2d033-87d9-4d3f-b1f8-1b18151c4e93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:32.336880 kubelet[2933]: E0714 22:30:32.336817 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74d2d033-87d9-4d3f-b1f8-1b18151c4e93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" podUID="74d2d033-87d9-4d3f-b1f8-1b18151c4e93" Jul 14 22:30:32.338855 containerd[1575]: time="2025-07-14T22:30:32.338784546Z" level=error msg="StopPodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" failed" error="failed to destroy network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.339241 kubelet[2933]: E0714 22:30:32.339190 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:30:32.339315 kubelet[2933]: E0714 22:30:32.339260 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a"} Jul 14 22:30:32.339315 kubelet[2933]: E0714 22:30:32.339308 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"92eef89a-ebb2-46c1-949c-ac95e4738764\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:32.339603 kubelet[2933]: E0714 22:30:32.339408 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"92eef89a-ebb2-46c1-949c-ac95e4738764\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-frkqm" podUID="92eef89a-ebb2-46c1-949c-ac95e4738764" Jul 14 22:30:32.342387 containerd[1575]: time="2025-07-14T22:30:32.342323189Z" level=error msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" failed" 
error="failed to destroy network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.342573 kubelet[2933]: E0714 22:30:32.342529 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:32.342712 kubelet[2933]: E0714 22:30:32.342583 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04"} Jul 14 22:30:32.342712 kubelet[2933]: E0714 22:30:32.342612 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3c48e21-ba3c-4349-bbfa-eab840c18864\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:32.342712 kubelet[2933]: E0714 22:30:32.342649 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3c48e21-ba3c-4349-bbfa-eab840c18864\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" podUID="e3c48e21-ba3c-4349-bbfa-eab840c18864" Jul 14 22:30:32.626683 containerd[1575]: time="2025-07-14T22:30:32.626593644Z" level=error msg="Failed to destroy network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.627037 containerd[1575]: time="2025-07-14T22:30:32.627001283Z" level=error msg="encountered an error cleaning up failed sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.627083 containerd[1575]: time="2025-07-14T22:30:32.627056155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbjjj,Uid:9a5e1c4b-6531-4d21-a204-77a82ca32ab1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.627386 kubelet[2933]: E0714 22:30:32.627316 2933 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:32.627444 kubelet[2933]: E0714 22:30:32.627421 2933 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:30:32.627472 kubelet[2933]: E0714 22:30:32.627448 2933 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbjjj" Jul 14 22:30:32.627529 kubelet[2933]: E0714 22:30:32.627496 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbjjj_calico-system(9a5e1c4b-6531-4d21-a204-77a82ca32ab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbjjj_calico-system(9a5e1c4b-6531-4d21-a204-77a82ca32ab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:32.819534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39-shm.mount: Deactivated successfully. Jul 14 22:30:32.819762 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04-shm.mount: Deactivated successfully. Jul 14 22:30:32.819932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a-shm.mount: Deactivated successfully. 
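By this point the identical add/delete failure has hit all eight pending pods (goldmane, two coredns pods, two calico-apiserver pods, calico-kube-controllers, whisker, and csi-node-driver-kbjjj). When scanning a capture like this, collapsing the repetition helps; here is a small, hypothetical Python sketch (not a tool shown in the log) that tallies failure lines per pod and sandbox ID, assuming journal-style lines like the ones above are piped on stdin.

```python
#!/usr/bin/env python3
# Illustrative helper: count failure lines per (pod, sandbox ID) pair.
# Usage assumption: journalctl -u kubelet -u containerd | python3 group_failures.py
import re
import sys
from collections import Counter

POD_RE = re.compile(r'pod="([^"]+)"')                   # kubelet's pod="ns/name"
SANDBOX_RE = re.compile(r'sandbox \\*"([0-9a-f]{64})')  # quoted 64-hex sandbox IDs

failures = Counter()
for line in sys.stdin:
    if "failed" not in line:
        continue
    pod = POD_RE.search(line)
    sid = SANDBOX_RE.search(line)
    failures[(pod.group(1) if pod else "?",
              sid.group(1)[:12] if sid else "?")] += 1

for (pod, sid), n in failures.most_common():
    print(f"{n:4d}  {pod:55s}  sandbox={sid}")
```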
Jul 14 22:30:33.296241 kubelet[2933]: I0714 22:30:33.296200 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:33.297468 containerd[1575]: time="2025-07-14T22:30:33.296951909Z" level=info msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" Jul 14 22:30:33.297468 containerd[1575]: time="2025-07-14T22:30:33.297154531Z" level=info msg="Ensure that sandbox 733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad in task-service has been cleanup successfully" Jul 14 22:30:33.297860 kubelet[2933]: I0714 22:30:33.297433 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:30:33.298073 containerd[1575]: time="2025-07-14T22:30:33.298025551Z" level=info msg="StopPodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\"" Jul 14 22:30:33.298230 containerd[1575]: time="2025-07-14T22:30:33.298205841Z" level=info msg="Ensure that sandbox 7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39 in task-service has been cleanup successfully" Jul 14 22:30:33.328246 containerd[1575]: time="2025-07-14T22:30:33.328166892Z" level=error msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" failed" error="failed to destroy network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:33.328627 kubelet[2933]: E0714 22:30:33.328555 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:33.328772 kubelet[2933]: E0714 22:30:33.328637 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad"} Jul 14 22:30:33.328772 kubelet[2933]: E0714 22:30:33.328686 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:33.328772 kubelet[2933]: E0714 22:30:33.328717 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:33.469484 containerd[1575]: time="2025-07-14T22:30:33.469422423Z" level=error msg="StopPodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" failed" error="failed to destroy network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:33.469790 kubelet[2933]: E0714 22:30:33.469724 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:30:33.469848 kubelet[2933]: E0714 22:30:33.469802 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39"} Jul 14 22:30:33.469885 kubelet[2933]: E0714 22:30:33.469844 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0df97cf3-d658-4a6f-aa84-b00ae717886f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:33.469979 kubelet[2933]: E0714 22:30:33.469875 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0df97cf3-d658-4a6f-aa84-b00ae717886f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-944fbfff-rkn5j" podUID="0df97cf3-d658-4a6f-aa84-b00ae717886f" Jul 14 22:30:34.266198 kubelet[2933]: E0714 22:30:34.266133 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:39.953561 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3426040035.mount: Deactivated successfully. 
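Every sandbox failure in the run above reduces to the same missing file: the calico CNI plugin stats /var/lib/calico/nodename, which the calico/node container writes after it starts and mounts /var/lib/calico/ from the host, and both (add) and (delete) operations fail until that file appears. A minimal standalone Go probe of that precondition (an illustrative sketch, not the plugin's own code):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// The file the calico CNI plugin stats before every add/delete.
    	// calico/node writes it at startup once /var/lib/calico/ is mounted.
    	const nodenameFile = "/var/lib/calico/nodename"

    	if _, err := os.Stat(nodenameFile); err != nil {
    		// The condition surfaced by every failed sandbox above.
    		fmt.Fprintf(os.Stderr,
    			"stat %s: %v: check that the calico/node container is running and has mounted /var/lib/calico/\n",
    			nodenameFile, err)
    		os.Exit(1)
    	}
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("calico node name: %s\n", strings.TrimSpace(string(data)))
    }

Consistent with this, the errors stop recurring later in the log once the calico-node container finally starts (the StartContainer for e0f1c04b... that returns successfully).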
Jul 14 22:30:42.266627 containerd[1575]: time="2025-07-14T22:30:42.266563963Z" level=info msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" Jul 14 22:30:42.267209 containerd[1575]: time="2025-07-14T22:30:42.266564053Z" level=info msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" Jul 14 22:30:42.312649 containerd[1575]: time="2025-07-14T22:30:42.312505037Z" level=error msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" failed" error="failed to destroy network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:42.312962 kubelet[2933]: E0714 22:30:42.312892 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:42.314062 kubelet[2933]: E0714 22:30:42.313001 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4"} Jul 14 22:30:42.314062 kubelet[2933]: E0714 22:30:42.313060 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"53c3c39f-fb7c-417c-96f3-a751d1e4f134\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:42.314062 kubelet[2933]: E0714 22:30:42.313093 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"53c3c39f-fb7c-417c-96f3-a751d1e4f134\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4kpsm" podUID="53c3c39f-fb7c-417c-96f3-a751d1e4f134" Jul 14 22:30:42.317384 containerd[1575]: time="2025-07-14T22:30:42.317309831Z" level=error msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" failed" error="failed to destroy network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:42.317582 kubelet[2933]: E0714 22:30:42.317532 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:30:42.317635 kubelet[2933]: E0714 22:30:42.317590 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f"} Jul 14 22:30:42.317684 kubelet[2933]: E0714 22:30:42.317631 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:42.317684 kubelet[2933]: E0714 22:30:42.317666 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"353e5854-d86d-4ff9-b078-7e4fa34f4ed2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-5b429" podUID="353e5854-d86d-4ff9-b078-7e4fa34f4ed2" Jul 14 22:30:42.934445 containerd[1575]: time="2025-07-14T22:30:42.934329408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:43.152983 containerd[1575]: time="2025-07-14T22:30:43.152895929Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 14 22:30:43.267394 containerd[1575]: time="2025-07-14T22:30:43.267215536Z" level=info msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" Jul 14 22:30:43.267873 containerd[1575]: time="2025-07-14T22:30:43.267440791Z" level=info msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" Jul 14 22:30:43.314576 containerd[1575]: time="2025-07-14T22:30:43.314494741Z" level=error msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" failed" error="failed to destroy network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:43.314833 kubelet[2933]: E0714 22:30:43.314790 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:43.315259 kubelet[2933]: E0714 22:30:43.314849 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04"} Jul 14 22:30:43.315259 kubelet[2933]: E0714 22:30:43.314888 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3c48e21-ba3c-4349-bbfa-eab840c18864\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:43.315259 kubelet[2933]: E0714 22:30:43.314912 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3c48e21-ba3c-4349-bbfa-eab840c18864\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" podUID="e3c48e21-ba3c-4349-bbfa-eab840c18864" Jul 14 22:30:43.319726 containerd[1575]: time="2025-07-14T22:30:43.319503795Z" level=error msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" failed" error="failed to destroy network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:43.319800 kubelet[2933]: E0714 22:30:43.319634 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:43.319800 kubelet[2933]: E0714 22:30:43.319668 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31"} Jul 14 22:30:43.319800 kubelet[2933]: E0714 22:30:43.319705 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74d2d033-87d9-4d3f-b1f8-1b18151c4e93\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:43.319800 kubelet[2933]: E0714 22:30:43.319735 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74d2d033-87d9-4d3f-b1f8-1b18151c4e93\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" podUID="74d2d033-87d9-4d3f-b1f8-1b18151c4e93" Jul 14 22:30:43.838981 containerd[1575]: time="2025-07-14T22:30:43.838834180Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:43.883838 containerd[1575]: time="2025-07-14T22:30:43.883742746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:43.884620 containerd[1575]: time="2025-07-14T22:30:43.884560722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 14.607002818s" Jul 14 22:30:43.884711 containerd[1575]: time="2025-07-14T22:30:43.884627360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 14 22:30:43.989124 containerd[1575]: time="2025-07-14T22:30:43.989045393Z" level=info msg="CreateContainer within sandbox \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 14 22:30:44.267075 containerd[1575]: time="2025-07-14T22:30:44.266188887Z" level=info msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" Jul 14 22:30:44.298496 containerd[1575]: time="2025-07-14T22:30:44.298412534Z" level=error msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" failed" error="failed to destroy network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 14 22:30:44.299090 kubelet[2933]: E0714 22:30:44.298815 2933 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:44.299090 kubelet[2933]: E0714 22:30:44.299055 2933 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad"} Jul 14 22:30:44.299165 kubelet[2933]: E0714 22:30:44.299104 2933 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 14 22:30:44.299165 kubelet[2933]: E0714 22:30:44.299136 2933 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a5e1c4b-6531-4d21-a204-77a82ca32ab1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbjjj" podUID="9a5e1c4b-6531-4d21-a204-77a82ca32ab1" Jul 14 22:30:44.300871 containerd[1575]: time="2025-07-14T22:30:44.300782681Z" level=info msg="CreateContainer within sandbox \"f9f7f94f039e2219b396616d315e427290c86cfd627799546ddb9efa3762390c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e0f1c04baf06a4560db9543e90e9e5fb2049243a8255fa0bd8c22c4f9c47e26f\"" Jul 14 22:30:44.301513 containerd[1575]: time="2025-07-14T22:30:44.301402856Z" level=info msg="StartContainer for \"e0f1c04baf06a4560db9543e90e9e5fb2049243a8255fa0bd8c22c4f9c47e26f\"" Jul 14 22:30:44.584438 containerd[1575]: time="2025-07-14T22:30:44.584366156Z" level=info msg="StartContainer for \"e0f1c04baf06a4560db9543e90e9e5fb2049243a8255fa0bd8c22c4f9c47e26f\" returns successfully" Jul 14 22:30:44.650923 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 14 22:30:44.651094 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 14 22:30:45.265953 containerd[1575]: time="2025-07-14T22:30:45.265879442Z" level=info msg="StopPodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\"" Jul 14 22:30:45.415705 systemd[1]: run-containerd-runc-k8s.io-e0f1c04baf06a4560db9543e90e9e5fb2049243a8255fa0bd8c22c4f9c47e26f-runc.MFIKEo.mount: Deactivated successfully. Jul 14 22:30:45.459411 kubelet[2933]: I0714 22:30:45.459315 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x8b2g" podStartSLOduration=2.758237999 podStartE2EDuration="47.459296801s" podCreationTimestamp="2025-07-14 22:29:58 +0000 UTC" firstStartedPulling="2025-07-14 22:29:59.184473662 +0000 UTC m=+46.067722068" lastFinishedPulling="2025-07-14 22:30:43.885532464 +0000 UTC m=+90.768780870" observedRunningTime="2025-07-14 22:30:45.458705112 +0000 UTC m=+92.341953518" watchObservedRunningTime="2025-07-14 22:30:45.459296801 +0000 UTC m=+92.342545207" Jul 14 22:30:46.266285 kubelet[2933]: E0714 22:30:46.266185 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.115 [INFO][4366] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.116 [INFO][4366] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" iface="eth0" netns="/var/run/netns/cni-18f3c874-6492-9a11-48d3-7f753d56de16" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.116 [INFO][4366] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" iface="eth0" netns="/var/run/netns/cni-18f3c874-6492-9a11-48d3-7f753d56de16" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.116 [INFO][4366] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" iface="eth0" netns="/var/run/netns/cni-18f3c874-6492-9a11-48d3-7f753d56de16" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.116 [INFO][4366] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.116 [INFO][4366] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.571 [INFO][4395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.573 [INFO][4395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.575 [INFO][4395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.685 [WARNING][4395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.685 [INFO][4395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.687 [INFO][4395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:46.695637 containerd[1575]: 2025-07-14 22:30:46.692 [INFO][4366] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:30:46.696624 containerd[1575]: time="2025-07-14T22:30:46.695893344Z" level=info msg="TearDown network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" successfully" Jul 14 22:30:46.696624 containerd[1575]: time="2025-07-14T22:30:46.695934072Z" level=info msg="StopPodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" returns successfully" Jul 14 22:30:46.696919 containerd[1575]: time="2025-07-14T22:30:46.696887227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-944fbfff-rkn5j,Uid:0df97cf3-d658-4a6f-aa84-b00ae717886f,Namespace:calico-system,Attempt:1,}" Jul 14 22:30:46.699469 systemd[1]: run-netns-cni\x2d18f3c874\x2d6492\x2d9a11\x2d48d3\x2d7f753d56de16.mount: Deactivated successfully. Jul 14 22:30:47.250582 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:46470.service - OpenSSH per-connection server daemon (10.0.0.1:46470). Jul 14 22:30:47.266108 kubelet[2933]: E0714 22:30:47.265776 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:47.266689 containerd[1575]: time="2025-07-14T22:30:47.266195286Z" level=info msg="StopPodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\"" Jul 14 22:30:47.266731 containerd[1575]: time="2025-07-14T22:30:47.266693936Z" level=info msg="StopPodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\"" Jul 14 22:30:47.441325 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 46470 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:30:47.446484 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:30:47.460269 systemd-logind[1548]: New session 8 of user core. Jul 14 22:30:47.470764 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 14 22:30:47.853798 systemd-networkd[1240]: cali990cc895aa0: Link UP Jul 14 22:30:47.854377 systemd-networkd[1240]: cali990cc895aa0: Gained carrier Jul 14 22:30:47.871104 sshd[4451]: pam_unix(sshd:session): session closed for user core Jul 14 22:30:47.875987 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:46470.service: Deactivated successfully. Jul 14 22:30:47.878670 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Jul 14 22:30:47.878760 systemd[1]: session-8.scope: Deactivated successfully. Jul 14 22:30:47.879837 systemd-logind[1548]: Removed session 8. Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.573 [INFO][4474] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.574 [INFO][4474] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" iface="eth0" netns="/var/run/netns/cni-a79b50ea-827e-3da5-dac2-6f032a1da308" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.575 [INFO][4474] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" iface="eth0" netns="/var/run/netns/cni-a79b50ea-827e-3da5-dac2-6f032a1da308" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.576 [INFO][4474] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" iface="eth0" netns="/var/run/netns/cni-a79b50ea-827e-3da5-dac2-6f032a1da308" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.576 [INFO][4474] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.576 [INFO][4474] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.641 [INFO][4593] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.643 [INFO][4593] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.831 [INFO][4593] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.836 [WARNING][4593] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.837 [INFO][4593] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.964 [INFO][4593] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:47.969940 containerd[1575]: 2025-07-14 22:30:47.966 [INFO][4474] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:30:47.970752 containerd[1575]: time="2025-07-14T22:30:47.970131676Z" level=info msg="TearDown network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" successfully" Jul 14 22:30:47.970752 containerd[1575]: time="2025-07-14T22:30:47.970170711Z" level=info msg="StopPodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" returns successfully" Jul 14 22:30:47.973661 containerd[1575]: time="2025-07-14T22:30:47.973629329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-4j7xf,Uid:f2efa654-2e4a-4eb0-bd1d-971920483d9d,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:30:47.974221 systemd[1]: run-netns-cni\x2da79b50ea\x2d827e\x2d3da5\x2ddac2\x2d6f032a1da308.mount: Deactivated successfully. 
Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.125 [INFO][4437] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.238 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--944fbfff--rkn5j-eth0 whisker-944fbfff- calico-system 0df97cf3-d658-4a6f-aa84-b00ae717886f 1032 0 2025-07-14 22:30:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:944fbfff projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-944fbfff-rkn5j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali990cc895aa0 [] [] }} ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.238 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.358 [INFO][4490] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.358 [INFO][4490] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135520), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-944fbfff-rkn5j", "timestamp":"2025-07-14 22:30:47.358003081 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.358 [INFO][4490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.358 [INFO][4490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.358 [INFO][4490] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.588 [INFO][4490] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.683 [INFO][4490] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.753 [INFO][4490] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.755 [INFO][4490] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.757 [INFO][4490] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.757 [INFO][4490] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.761 [INFO][4490] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452 Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.778 [INFO][4490] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.831 [INFO][4490] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.831 [INFO][4490] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" host="localhost" Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.831 [INFO][4490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:30:48.002544 containerd[1575]: 2025-07-14 22:30:47.831 [INFO][4490] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.003184 containerd[1575]: 2025-07-14 22:30:47.835 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--944fbfff--rkn5j-eth0", GenerateName:"whisker-944fbfff-", Namespace:"calico-system", SelfLink:"", UID:"0df97cf3-d658-4a6f-aa84-b00ae717886f", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 30, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"944fbfff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-944fbfff-rkn5j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali990cc895aa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:30:48.003184 containerd[1575]: 2025-07-14 22:30:47.835 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.003184 containerd[1575]: 2025-07-14 22:30:47.835 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali990cc895aa0 ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.003184 containerd[1575]: 2025-07-14 22:30:47.859 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.003184 containerd[1575]: 2025-07-14 22:30:47.861 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--944fbfff--rkn5j-eth0", GenerateName:"whisker-944fbfff-", Namespace:"calico-system", SelfLink:"", UID:"0df97cf3-d658-4a6f-aa84-b00ae717886f", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 30, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"944fbfff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452", Pod:"whisker-944fbfff-rkn5j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali990cc895aa0", MAC:"c6:6d:e8:8a:07:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:30:48.003184 containerd[1575]: 2025-07-14 22:30:47.999 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Namespace="calico-system" Pod="whisker-944fbfff-rkn5j" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:30:48.302384 kernel: bpftool[4676]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 14 22:30:48.338417 containerd[1575]: time="2025-07-14T22:30:48.336844543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:30:48.338417 containerd[1575]: time="2025-07-14T22:30:48.336922564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:30:48.338417 containerd[1575]: time="2025-07-14T22:30:48.336933745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:30:48.338417 containerd[1575]: time="2025-07-14T22:30:48.337033136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.574 [INFO][4473] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.576 [INFO][4473] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" iface="eth0" netns="/var/run/netns/cni-cdb2fe9a-fe44-7cf5-deec-f296902f6c8c" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.577 [INFO][4473] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" iface="eth0" netns="/var/run/netns/cni-cdb2fe9a-fe44-7cf5-deec-f296902f6c8c" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.588 [INFO][4473] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" iface="eth0" netns="/var/run/netns/cni-cdb2fe9a-fe44-7cf5-deec-f296902f6c8c" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.588 [INFO][4473] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.589 [INFO][4473] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.673 [INFO][4605] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.674 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:47.963 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:48.325 [WARNING][4605] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:48.325 [INFO][4605] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:48.327 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:48.352566 containerd[1575]: 2025-07-14 22:30:48.341 [INFO][4473] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:30:48.355364 containerd[1575]: time="2025-07-14T22:30:48.354542861Z" level=info msg="TearDown network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" successfully" Jul 14 22:30:48.355364 containerd[1575]: time="2025-07-14T22:30:48.354584752Z" level=info msg="StopPodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" returns successfully" Jul 14 22:30:48.355450 kubelet[2933]: E0714 22:30:48.355004 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:48.356504 systemd[1]: run-netns-cni\x2dcdb2fe9a\x2dfe44\x2d7cf5\x2ddeec\x2df296902f6c8c.mount: Deactivated successfully. 
Jul 14 22:30:48.357723 containerd[1575]: time="2025-07-14T22:30:48.357685449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frkqm,Uid:92eef89a-ebb2-46c1-949c-ac95e4738764,Namespace:kube-system,Attempt:1,}" Jul 14 22:30:48.388679 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:30:48.422827 containerd[1575]: time="2025-07-14T22:30:48.422766418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-944fbfff-rkn5j,Uid:0df97cf3-d658-4a6f-aa84-b00ae717886f,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\"" Jul 14 22:30:48.425156 containerd[1575]: time="2025-07-14T22:30:48.425051846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 14 22:30:48.638769 systemd-networkd[1240]: vxlan.calico: Link UP Jul 14 22:30:48.638781 systemd-networkd[1240]: vxlan.calico: Gained carrier Jul 14 22:30:48.933493 systemd-networkd[1240]: cali990cc895aa0: Gained IPv6LL Jul 14 22:30:49.803575 systemd-networkd[1240]: cali0b9517f2649: Link UP Jul 14 22:30:49.805394 systemd-networkd[1240]: cali0b9517f2649: Gained carrier Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.601 [INFO][4794] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0 calico-apiserver-7556875495- calico-apiserver f2efa654-2e4a-4eb0-bd1d-971920483d9d 1075 0 2025-07-14 22:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7556875495 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7556875495-4j7xf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0b9517f2649 [] [] }} ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.601 [INFO][4794] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.641 [INFO][4810] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" HandleID="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.641 [INFO][4810] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" HandleID="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7556875495-4j7xf", 
"timestamp":"2025-07-14 22:30:49.641276889 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.641 [INFO][4810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.641 [INFO][4810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.641 [INFO][4810] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.650 [INFO][4810] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.676 [INFO][4810] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.681 [INFO][4810] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.682 [INFO][4810] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.684 [INFO][4810] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.684 [INFO][4810] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.686 [INFO][4810] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343 Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.703 [INFO][4810] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.796 [INFO][4810] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.796 [INFO][4810] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" host="localhost" Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.796 [INFO][4810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:30:49.831551 containerd[1575]: 2025-07-14 22:30:49.796 [INFO][4810] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" HandleID="k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.832793 containerd[1575]: 2025-07-14 22:30:49.799 [INFO][4794] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2efa654-2e4a-4eb0-bd1d-971920483d9d", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7556875495-4j7xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b9517f2649", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:30:49.832793 containerd[1575]: 2025-07-14 22:30:49.800 [INFO][4794] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.832793 containerd[1575]: 2025-07-14 22:30:49.800 [INFO][4794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0b9517f2649 ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.832793 containerd[1575]: 2025-07-14 22:30:49.805 [INFO][4794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.832793 containerd[1575]: 2025-07-14 22:30:49.806 [INFO][4794] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2efa654-2e4a-4eb0-bd1d-971920483d9d", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343", Pod:"calico-apiserver-7556875495-4j7xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b9517f2649", MAC:"e2:b9:29:13:78:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:30:49.832793 containerd[1575]: 2025-07-14 22:30:49.827 [INFO][4794] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-4j7xf" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:30:49.908680 containerd[1575]: time="2025-07-14T22:30:49.908570086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:30:49.908987 containerd[1575]: time="2025-07-14T22:30:49.908684776Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:30:49.908987 containerd[1575]: time="2025-07-14T22:30:49.908721486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:30:49.910526 containerd[1575]: time="2025-07-14T22:30:49.910454971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:30:49.942671 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:30:49.978387 containerd[1575]: time="2025-07-14T22:30:49.978304960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-4j7xf,Uid:f2efa654-2e4a-4eb0-bd1d-971920483d9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343\"" Jul 14 22:30:50.342603 systemd-networkd[1240]: vxlan.calico: Gained IPv6LL Jul 14 22:30:50.480973 systemd-networkd[1240]: cali3cb0230cc91: Link UP Jul 14 22:30:50.483383 systemd-networkd[1240]: cali3cb0230cc91: Gained carrier Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:49.958 [INFO][4844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0 coredns-7c65d6cfc9- kube-system 92eef89a-ebb2-46c1-949c-ac95e4738764 1074 0 2025-07-14 22:29:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-frkqm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3cb0230cc91 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:49.958 [INFO][4844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.002 [INFO][4874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" HandleID="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.002 [INFO][4874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" HandleID="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00014c2a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-frkqm", "timestamp":"2025-07-14 22:30:50.002501168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.002 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.003 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
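The records above show the Calico IPAM plugin taking the host-wide lock before assigning, and the records that follow walk through affinity lookup, block load, and assignment of 192.168.88.131. Below is a minimal, self-contained Go sketch of that assign-under-lock pattern; the types and the "assumed-taken" handles are illustrative stand-ins rather than the libcalico-go API, and the pre-filled addresses only mirror what the surrounding records show as already assigned.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models a node-affine IPAM block such as 192.168.88.128/26.
// The real Calico block object carries far more state; this only
// illustrates the assign-under-lock sequence in the records above.
type block struct {
	cidr      netip.Prefix
	allocated map[netip.Addr]string // address -> handle ID
}

// assign claims the first free address in the block for handleID.
func (b *block) assign(handleID string) (netip.Addr, error) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.allocated[a]; !taken {
			b.allocated[a] = handleID
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.cidr)
}

var ipamLock sync.Mutex // stand-in for the host-wide IPAM lock

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		allocated: map[netip.Addr]string{
			// .130 went to the apiserver pod earlier in the log;
			// .128/.129 are assumed taken so the next free is .131.
			netip.MustParseAddr("192.168.88.128"): "assumed-taken-a",
			netip.MustParseAddr("192.168.88.129"): "assumed-taken-b",
			netip.MustParseAddr("192.168.88.130"): "k8s-pod-network.e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343",
		},
	}

	ipamLock.Lock() // "About to acquire host-wide IPAM lock."
	addr, err := b.assign("k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524")
	ipamLock.Unlock() // "Released host-wide IPAM lock."
	if err != nil {
		panic(err)
	}
	fmt.Printf("assigned %s/26\n", addr) // the log goes on to claim 192.168.88.131/26
}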
Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.003 [INFO][4874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.078 [INFO][4874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.083 [INFO][4874] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.087 [INFO][4874] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.089 [INFO][4874] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.091 [INFO][4874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.091 [INFO][4874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.092 [INFO][4874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524 Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.306 [INFO][4874] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.474 [INFO][4874] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.474 [INFO][4874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" host="localhost" Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.474 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
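All of the pod addresses in these records come from the one node-affine block, 192.168.88.128/26, which spans 2^(32-26) = 64 addresses (.128 through .191). A short Go check to make that arithmetic concrete:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The node-affine block from the records above.
	block := netip.MustParsePrefix("192.168.88.128/26")

	for _, s := range []string{"192.168.88.130", "192.168.88.131", "192.168.88.192"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
	}
	// Output:
	// 192.168.88.130 in 192.168.88.128/26: true
	// 192.168.88.131 in 192.168.88.128/26: true
	// 192.168.88.192 in 192.168.88.128/26: false
}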
Jul 14 22:30:50.763362 containerd[1575]: 2025-07-14 22:30:50.474 [INFO][4874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" HandleID="k8s-pod-network.35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:50.764020 containerd[1575]: 2025-07-14 22:30:50.478 [INFO][4844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92eef89a-ebb2-46c1-949c-ac95e4738764", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-frkqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cb0230cc91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:30:50.764020 containerd[1575]: 2025-07-14 22:30:50.478 [INFO][4844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:50.764020 containerd[1575]: 2025-07-14 22:30:50.478 [INFO][4844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3cb0230cc91 ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:50.764020 containerd[1575]: 2025-07-14 22:30:50.484 [INFO][4844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:50.764020 
containerd[1575]: 2025-07-14 22:30:50.484 [INFO][4844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92eef89a-ebb2-46c1-949c-ac95e4738764", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524", Pod:"coredns-7c65d6cfc9-frkqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cb0230cc91", MAC:"8e:72:99:ea:3e:d6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:30:50.764020 containerd[1575]: 2025-07-14 22:30:50.758 [INFO][4844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524" Namespace="kube-system" Pod="coredns-7c65d6cfc9-frkqm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:30:51.032525 containerd[1575]: time="2025-07-14T22:30:51.032380530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:30:51.032525 containerd[1575]: time="2025-07-14T22:30:51.032441928Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:30:51.032525 containerd[1575]: time="2025-07-14T22:30:51.032456536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:30:51.033070 containerd[1575]: time="2025-07-14T22:30:51.032562379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:30:51.066776 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:30:51.095334 containerd[1575]: time="2025-07-14T22:30:51.095292198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-frkqm,Uid:92eef89a-ebb2-46c1-949c-ac95e4738764,Namespace:kube-system,Attempt:1,} returns sandbox id \"35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524\"" Jul 14 22:30:51.096510 kubelet[2933]: E0714 22:30:51.096478 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:51.099325 containerd[1575]: time="2025-07-14T22:30:51.099276859Z" level=info msg="CreateContainer within sandbox \"35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:30:51.429710 systemd-networkd[1240]: cali0b9517f2649: Gained IPv6LL Jul 14 22:30:51.790643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount700247000.mount: Deactivated successfully. Jul 14 22:30:51.794397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21500793.mount: Deactivated successfully. Jul 14 22:30:52.116717 containerd[1575]: time="2025-07-14T22:30:52.116515387Z" level=info msg="CreateContainer within sandbox \"35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b3984047241c5444a717cfd64f66cc53e5f4c92f0c82f4ea2ab175a68b997bbb\"" Jul 14 22:30:52.117500 containerd[1575]: time="2025-07-14T22:30:52.117422941Z" level=info msg="StartContainer for \"b3984047241c5444a717cfd64f66cc53e5f4c92f0c82f4ea2ab175a68b997bbb\"" Jul 14 22:30:52.427184 containerd[1575]: time="2025-07-14T22:30:52.427004014Z" level=info msg="StartContainer for \"b3984047241c5444a717cfd64f66cc53e5f4c92f0c82f4ea2ab175a68b997bbb\" returns successfully" Jul 14 22:30:52.518608 systemd-networkd[1240]: cali3cb0230cc91: Gained IPv6LL Jul 14 22:30:53.432181 kubelet[2933]: E0714 22:30:53.432146 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:53.851674 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:46600.service - OpenSSH per-connection server daemon (10.0.0.1:46600). Jul 14 22:30:53.967472 sshd[4976]: Accepted publickey for core from 10.0.0.1 port 46600 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:30:53.969397 sshd[4976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:30:53.975102 systemd-logind[1548]: New session 9 of user core. Jul 14 22:30:53.982654 systemd[1]: Started session-9.scope - Session 9 of User core. 
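The recurring kubelet error above, "Nameserver limits exceeded", is the classic three-nameserver resolver limit: when the effective resolv.conf lists more than three servers, kubelet keeps the first three and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A sketch of that truncation follows; the fourth server in the sample input is invented for illustration, since the omitted entries never appear in this log.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// maxNameservers mirrors the traditional resolver limit of three
// entries that kubelet enforces when it logs "Nameserver limits
// exceeded".
const maxNameservers = 3

// applyLimit splits the nameserver lines of a resolv.conf into the
// entries that are applied and the entries that are omitted.
func applyLimit(resolvConf string) (kept, dropped []string) {
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			if len(kept) < maxNameservers {
				kept = append(kept, fields[1])
			} else {
				dropped = append(dropped, fields[1])
			}
		}
	}
	return kept, dropped
}

func main() {
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	kept, dropped := applyLimit(conf)
	fmt.Println("applied:", strings.Join(kept, " "))
	fmt.Println("omitted:", strings.Join(dropped, " "))
}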
Jul 14 22:30:54.011038 kubelet[2933]: I0714 22:30:54.010940 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-frkqm" podStartSLOduration=95.010880677 podStartE2EDuration="1m35.010880677s" podCreationTimestamp="2025-07-14 22:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:30:54.010840539 +0000 UTC m=+100.894088945" watchObservedRunningTime="2025-07-14 22:30:54.010880677 +0000 UTC m=+100.894129083" Jul 14 22:30:54.266641 containerd[1575]: time="2025-07-14T22:30:54.266432545Z" level=info msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" Jul 14 22:30:54.267214 containerd[1575]: time="2025-07-14T22:30:54.267188597Z" level=info msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" Jul 14 22:30:54.324735 sshd[4976]: pam_unix(sshd:session): session closed for user core Jul 14 22:30:54.413007 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:46600.service: Deactivated successfully. Jul 14 22:30:54.415622 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Jul 14 22:30:54.415732 systemd[1]: session-9.scope: Deactivated successfully. Jul 14 22:30:54.416845 systemd-logind[1548]: Removed session 9. Jul 14 22:30:54.434906 kubelet[2933]: E0714 22:30:54.434828 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.472 [INFO][5020] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.473 [INFO][5020] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" iface="eth0" netns="/var/run/netns/cni-c3cae5be-8070-f0b2-c82d-e955e1f6fbf7" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.473 [INFO][5020] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" iface="eth0" netns="/var/run/netns/cni-c3cae5be-8070-f0b2-c82d-e955e1f6fbf7" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.473 [INFO][5020] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" iface="eth0" netns="/var/run/netns/cni-c3cae5be-8070-f0b2-c82d-e955e1f6fbf7" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.473 [INFO][5020] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.473 [INFO][5020] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.531 [INFO][5041] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.532 [INFO][5041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.532 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.712 [WARNING][5041] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.712 [INFO][5041] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.718 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:54.725196 containerd[1575]: 2025-07-14 22:30:54.721 [INFO][5020] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:30:54.726488 containerd[1575]: time="2025-07-14T22:30:54.725761349Z" level=info msg="TearDown network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" successfully" Jul 14 22:30:54.726488 containerd[1575]: time="2025-07-14T22:30:54.726485329Z" level=info msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" returns successfully" Jul 14 22:30:54.729488 systemd[1]: run-netns-cni\x2dc3cae5be\x2d8070\x2df0b2\x2dc82d\x2de955e1f6fbf7.mount: Deactivated successfully. Jul 14 22:30:54.729733 containerd[1575]: time="2025-07-14T22:30:54.729572581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d74cf67c-tmxjm,Uid:e3c48e21-ba3c-4349-bbfa-eab840c18864,Namespace:calico-system,Attempt:1,}" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.703 [INFO][5025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.703 [INFO][5025] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" iface="eth0" netns="/var/run/netns/cni-168f44f2-9ee8-642c-47aa-d79a04eb83be" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.703 [INFO][5025] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" iface="eth0" netns="/var/run/netns/cni-168f44f2-9ee8-642c-47aa-d79a04eb83be" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.704 [INFO][5025] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" iface="eth0" netns="/var/run/netns/cni-168f44f2-9ee8-642c-47aa-d79a04eb83be" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.704 [INFO][5025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.704 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.729 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.729 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.729 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.735 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.735 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.736 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:54.742971 containerd[1575]: 2025-07-14 22:30:54.739 [INFO][5025] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:30:54.744738 containerd[1575]: time="2025-07-14T22:30:54.743145174Z" level=info msg="TearDown network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" successfully" Jul 14 22:30:54.744738 containerd[1575]: time="2025-07-14T22:30:54.743171174Z" level=info msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" returns successfully" Jul 14 22:30:54.745143 containerd[1575]: time="2025-07-14T22:30:54.745101552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-z7qk6,Uid:74d2d033-87d9-4d3f-b1f8-1b18151c4e93,Namespace:calico-apiserver,Attempt:1,}" Jul 14 22:30:54.751080 systemd[1]: run-netns-cni\x2d168f44f2\x2d9ee8\x2d642c\x2d47aa\x2dd79a04eb83be.mount: Deactivated successfully. Jul 14 22:30:55.266960 kubelet[2933]: E0714 22:30:55.266513 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:55.267505 containerd[1575]: time="2025-07-14T22:30:55.267467124Z" level=info msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" Jul 14 22:30:55.268328 containerd[1575]: time="2025-07-14T22:30:55.268029323Z" level=info msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" Jul 14 22:30:56.268609 containerd[1575]: time="2025-07-14T22:30:56.268219841Z" level=info msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" Jul 14 22:30:59.126595 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:45688.service - OpenSSH per-connection server daemon (10.0.0.1:45688). Jul 14 22:30:59.185686 kubelet[2933]: E0714 22:30:59.185450 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.421 [INFO][5112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.421 [INFO][5112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" iface="eth0" netns="/var/run/netns/cni-a98ba3ad-0c45-b71e-0c48-61faac105178" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.427 [INFO][5112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" iface="eth0" netns="/var/run/netns/cni-a98ba3ad-0c45-b71e-0c48-61faac105178" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.428 [INFO][5112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" iface="eth0" netns="/var/run/netns/cni-a98ba3ad-0c45-b71e-0c48-61faac105178" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.428 [INFO][5112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.428 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.467 [INFO][5130] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.467 [INFO][5130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:58.467 [INFO][5130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:59.099 [WARNING][5130] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:59.099 [INFO][5130] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:59.187 [INFO][5130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:59.194214 containerd[1575]: 2025-07-14 22:30:59.190 [INFO][5112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:30:59.195418 containerd[1575]: time="2025-07-14T22:30:59.194409267Z" level=info msg="TearDown network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" successfully" Jul 14 22:30:59.195418 containerd[1575]: time="2025-07-14T22:30:59.194436629Z" level=info msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" returns successfully" Jul 14 22:30:59.195418 containerd[1575]: time="2025-07-14T22:30:59.195201417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kpsm,Uid:53c3c39f-fb7c-417c-96f3-a751d1e4f134,Namespace:kube-system,Attempt:1,}" Jul 14 22:30:59.195582 kubelet[2933]: E0714 22:30:59.194772 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:30:59.198094 systemd[1]: run-netns-cni\x2da98ba3ad\x2d0c45\x2db71e\x2d0c48\x2d61faac105178.mount: Deactivated successfully. 
Jul 14 22:30:59.662694 containerd[1575]: time="2025-07-14T22:30:59.662583058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:30:59.737337 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 45688 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:30:59.767931 sshd[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:30:59.785782 systemd-logind[1548]: New session 10 of user core. Jul 14 22:30:59.806655 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.420 [INFO][5085] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.420 [INFO][5085] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" iface="eth0" netns="/var/run/netns/cni-2f07a9d9-b960-d6ac-0071-1b6af9894e86" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.421 [INFO][5085] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" iface="eth0" netns="/var/run/netns/cni-2f07a9d9-b960-d6ac-0071-1b6af9894e86" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.422 [INFO][5085] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" iface="eth0" netns="/var/run/netns/cni-2f07a9d9-b960-d6ac-0071-1b6af9894e86" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.422 [INFO][5085] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.422 [INFO][5085] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.482 [INFO][5123] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:58.482 [INFO][5123] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:59.187 [INFO][5123] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:59.726 [WARNING][5123] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:59.726 [INFO][5123] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:59.889 [INFO][5123] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:30:59.895774 containerd[1575]: 2025-07-14 22:30:59.893 [INFO][5085] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:30:59.897647 containerd[1575]: time="2025-07-14T22:30:59.897453040Z" level=info msg="TearDown network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" successfully" Jul 14 22:30:59.897647 containerd[1575]: time="2025-07-14T22:30:59.897496724Z" level=info msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" returns successfully" Jul 14 22:30:59.899086 containerd[1575]: time="2025-07-14T22:30:59.898626943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbjjj,Uid:9a5e1c4b-6531-4d21-a204-77a82ca32ab1,Namespace:calico-system,Attempt:1,}" Jul 14 22:30:59.899794 systemd[1]: run-netns-cni\x2d2f07a9d9\x2db960\x2dd6ac\x2d0071\x2d1b6af9894e86.mount: Deactivated successfully. Jul 14 22:30:59.972070 containerd[1575]: time="2025-07-14T22:30:59.971850834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 14 22:31:00.121788 sshd[5174]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:00.126736 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:45688.service: Deactivated successfully. Jul 14 22:31:00.131823 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Jul 14 22:31:00.132753 systemd[1]: session-10.scope: Deactivated successfully. Jul 14 22:31:00.133870 systemd-logind[1548]: Removed session 10. Jul 14 22:31:00.190020 containerd[1575]: time="2025-07-14T22:31:00.189932286Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.425 [INFO][5086] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.425 [INFO][5086] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" iface="eth0" netns="/var/run/netns/cni-d569f88d-4b0b-6407-65d3-4e434e2618fc" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.427 [INFO][5086] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" iface="eth0" netns="/var/run/netns/cni-d569f88d-4b0b-6407-65d3-4e434e2618fc" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.429 [INFO][5086] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" iface="eth0" netns="/var/run/netns/cni-d569f88d-4b0b-6407-65d3-4e434e2618fc" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.429 [INFO][5086] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.429 [INFO][5086] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.508 [INFO][5132] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:58.508 [INFO][5132] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:30:59.889 [INFO][5132] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:31:00.107 [WARNING][5132] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:31:00.107 [INFO][5132] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:31:00.264 [INFO][5132] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:00.272982 containerd[1575]: 2025-07-14 22:31:00.270 [INFO][5086] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:00.278629 containerd[1575]: time="2025-07-14T22:31:00.278480905Z" level=info msg="TearDown network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" successfully" Jul 14 22:31:00.278629 containerd[1575]: time="2025-07-14T22:31:00.278522906Z" level=info msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" returns successfully" Jul 14 22:31:00.280554 systemd[1]: run-netns-cni\x2dd569f88d\x2d4b0b\x2d6407\x2d65d3\x2d4e434e2618fc.mount: Deactivated successfully. 
Jul 14 22:31:00.284147 containerd[1575]: time="2025-07-14T22:31:00.283775712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5b429,Uid:353e5854-d86d-4ff9-b078-7e4fa34f4ed2,Namespace:calico-system,Attempt:1,}" Jul 14 22:31:00.410205 containerd[1575]: time="2025-07-14T22:31:00.410104735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:00.413305 containerd[1575]: time="2025-07-14T22:31:00.413210442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 11.988106947s" Jul 14 22:31:00.413305 containerd[1575]: time="2025-07-14T22:31:00.413300565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 14 22:31:00.418467 containerd[1575]: time="2025-07-14T22:31:00.418030156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:31:00.421875 containerd[1575]: time="2025-07-14T22:31:00.421805979Z" level=info msg="CreateContainer within sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 14 22:31:00.539159 systemd-networkd[1240]: calidc9acb5be9b: Link UP Jul 14 22:31:00.541572 systemd-networkd[1240]: calidc9acb5be9b: Gained carrier Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:30:59.888 [INFO][5182] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0 calico-kube-controllers-64d74cf67c- calico-system e3c48e21-ba3c-4349-bbfa-eab840c18864 1117 0 2025-07-14 22:29:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64d74cf67c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-64d74cf67c-tmxjm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc9acb5be9b [] [] }} ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:30:59.889 [INFO][5182] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.292 [INFO][5240] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" HandleID="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" 
Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.293 [INFO][5240] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" HandleID="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b6730), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-64d74cf67c-tmxjm", "timestamp":"2025-07-14 22:31:00.292632038 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.293 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.293 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.293 [INFO][5240] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.413 [INFO][5240] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.477 [INFO][5240] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.483 [INFO][5240] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.485 [INFO][5240] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.488 [INFO][5240] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.488 [INFO][5240] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.489 [INFO][5240] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8 Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.497 [INFO][5240] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.527 [INFO][5240] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.528 [INFO][5240] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" host="localhost" Jul 14 22:31:00.598332 containerd[1575]: 
2025-07-14 22:31:00.528 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:00.598332 containerd[1575]: 2025-07-14 22:31:00.528 [INFO][5240] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" HandleID="k8s-pod-network.b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.599321 containerd[1575]: 2025-07-14 22:31:00.531 [INFO][5182] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0", GenerateName:"calico-kube-controllers-64d74cf67c-", Namespace:"calico-system", SelfLink:"", UID:"e3c48e21-ba3c-4349-bbfa-eab840c18864", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d74cf67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-64d74cf67c-tmxjm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc9acb5be9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:00.599321 containerd[1575]: 2025-07-14 22:31:00.532 [INFO][5182] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.599321 containerd[1575]: 2025-07-14 22:31:00.532 [INFO][5182] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc9acb5be9b ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.599321 containerd[1575]: 2025-07-14 22:31:00.556 [INFO][5182] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.599321 
containerd[1575]: 2025-07-14 22:31:00.563 [INFO][5182] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0", GenerateName:"calico-kube-controllers-64d74cf67c-", Namespace:"calico-system", SelfLink:"", UID:"e3c48e21-ba3c-4349-bbfa-eab840c18864", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d74cf67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8", Pod:"calico-kube-controllers-64d74cf67c-tmxjm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc9acb5be9b", MAC:"1a:15:ea:2a:3c:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:00.599321 containerd[1575]: 2025-07-14 22:31:00.577 [INFO][5182] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8" Namespace="calico-system" Pod="calico-kube-controllers-64d74cf67c-tmxjm" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:00.628151 containerd[1575]: time="2025-07-14T22:31:00.628099166Z" level=info msg="CreateContainer within sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\"" Jul 14 22:31:00.630643 containerd[1575]: time="2025-07-14T22:31:00.630609932Z" level=info msg="StartContainer for \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\"" Jul 14 22:31:00.668851 systemd-networkd[1240]: cali9262eb822a4: Link UP Jul 14 22:31:00.674093 systemd-networkd[1240]: cali9262eb822a4: Gained carrier Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.338 [INFO][5224] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0 calico-apiserver-7556875495- calico-apiserver 74d2d033-87d9-4d3f-b1f8-1b18151c4e93 1118 0 2025-07-14 22:29:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7556875495 projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7556875495-z7qk6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9262eb822a4 [] [] }} ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.338 [INFO][5224] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.482 [INFO][5249] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" HandleID="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.482 [INFO][5249] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" HandleID="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7556875495-z7qk6", "timestamp":"2025-07-14 22:31:00.482662383 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.483 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.528 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
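Between "Attempting to assign" and "Successfully claimed IPs" sits a "Writing block in order to claim IPs" step: the block is read, mutated in memory, and written back conditionally, so a concurrent writer forces a re-read and retry. The Go sketch below models that revision-checked write with an in-memory stand-in; the real Calico client and its retry policy differ in detail.

package main

import (
	"errors"
	"fmt"
)

// blockDoc imitates a datastore document guarded by a revision number.
type blockDoc struct {
	revision int
	free     []string
}

var store = blockDoc{revision: 41, free: []string{"192.168.88.133", "192.168.88.134"}}

var errConflict = errors.New("update conflict: block changed since read")

// writeBlock accepts the update only if the caller saw the latest
// revision; otherwise the caller must re-read the block and retry.
func writeBlock(sawRevision int, updated blockDoc) error {
	if sawRevision != store.revision {
		return errConflict
	}
	updated.revision = sawRevision + 1
	store = updated
	return nil
}

// claimOne reads the block, claims the first free address in memory,
// and tries to write the block back, retrying on conflict.
func claimOne() (string, error) {
	for attempt := 0; attempt < 3; attempt++ {
		snap := store // read the block at its current revision
		if len(snap.free) == 0 {
			return "", errors.New("block is full")
		}
		ip := snap.free[0]
		snap.free = snap.free[1:] // claim in memory
		if writeBlock(snap.revision, snap) == nil {
			return ip, nil // "Successfully claimed IPs"
		}
	}
	return "", errors.New("gave up after repeated conflicts")
}

func main() {
	ip, err := claimOne()
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // this flow goes on to claim 192.168.88.133/26
}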
Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.528 [INFO][5249] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.551 [INFO][5249] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.580 [INFO][5249] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.589 [INFO][5249] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.591 [INFO][5249] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.597 [INFO][5249] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.597 [INFO][5249] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.607 [INFO][5249] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.617 [INFO][5249] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.640 [INFO][5249] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.640 [INFO][5249] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" host="localhost" Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.640 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
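
The [5249] records above trace one complete pass of the block-affinity IPAM flow Calico logs here: acquire the host-wide IPAM lock, look up the host's block affinities, load the affined 192.168.88.128/26 block, claim the next free address, write the block back, and release the lock. The Go program below is a minimal, self-contained sketch of that shape, not Calico's implementation; every name in it (ipamState, claimNext, nextIP) is hypothetical, and the pre-assigned addresses simply mirror the .128-.132 endpoints already visible earlier in this log.

    // Minimal sketch of block-affinity IP assignment, under the assumptions above.
    package main

    import (
            "fmt"
            "net"
            "sync"
    )

    type ipamState struct {
            mu       sync.Mutex      // stands in for the "host-wide IPAM lock"
            block    *net.IPNet      // the affined block, e.g. 192.168.88.128/26
            assigned map[string]bool // addresses already claimed within the block
    }

    // claimNext mimics "Attempting to assign 1 addresses from block": it scans
    // the affined block for the first unassigned address and records the claim.
    func (s *ipamState) claimNext(handle string) (net.IP, error) {
            s.mu.Lock()         // "Acquired host-wide IPAM lock."
            defer s.mu.Unlock() // "Released host-wide IPAM lock."

            ip := s.block.IP.Mask(s.block.Mask)
            for ; s.block.Contains(ip); ip = nextIP(ip) {
                    if !s.assigned[ip.String()] {
                            s.assigned[ip.String()] = true // "Writing block in order to claim IPs"
                            return ip, nil
                    }
            }
            return nil, fmt.Errorf("block %s exhausted for handle %s", s.block, handle)
    }

    // nextIP returns the successor address, carrying across octet boundaries.
    func nextIP(ip net.IP) net.IP {
            out := make(net.IP, len(ip))
            copy(out, ip)
            for i := len(out) - 1; i >= 0; i-- {
                    out[i]++
                    if out[i] != 0 {
                            break
                    }
            }
            return out
    }

    func main() {
            _, block, _ := net.ParseCIDR("192.168.88.128/26")
            s := &ipamState{block: block, assigned: map[string]bool{
                    // .128-.132 were handed out to earlier endpoints in this log
                    "192.168.88.128": true, "192.168.88.129": true, "192.168.88.130": true,
                    "192.168.88.131": true, "192.168.88.132": true,
            }}
            ip, _ := s.claimNext("k8s-pod-network.46d8cfaa...")
            fmt.Println(ip) // 192.168.88.133, matching "Successfully claimed IPs"
    }

The same sequence repeats below for goldmane (.134), coredns (.135), and csi-node-driver (.136): each claim scans the same affined /26 and takes the next free address in order.
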
Jul 14 22:31:00.706038 containerd[1575]: 2025-07-14 22:31:00.640 [INFO][5249] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" HandleID="k8s-pod-network.46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.706762 containerd[1575]: 2025-07-14 22:31:00.657 [INFO][5224] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"74d2d033-87d9-4d3f-b1f8-1b18151c4e93", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7556875495-z7qk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9262eb822a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:00.706762 containerd[1575]: 2025-07-14 22:31:00.658 [INFO][5224] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.706762 containerd[1575]: 2025-07-14 22:31:00.658 [INFO][5224] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9262eb822a4 ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.706762 containerd[1575]: 2025-07-14 22:31:00.674 [INFO][5224] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.706762 containerd[1575]: 2025-07-14 22:31:00.676 [INFO][5224] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"74d2d033-87d9-4d3f-b1f8-1b18151c4e93", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d", Pod:"calico-apiserver-7556875495-z7qk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9262eb822a4", MAC:"ee:08:9b:30:72:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:00.706762 containerd[1575]: 2025-07-14 22:31:00.690 [INFO][5224] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d" Namespace="calico-apiserver" Pod="calico-apiserver-7556875495-z7qk6" WorkloadEndpoint="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:00.708956 containerd[1575]: time="2025-07-14T22:31:00.708224510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:31:00.708956 containerd[1575]: time="2025-07-14T22:31:00.708292270Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:31:00.708956 containerd[1575]: time="2025-07-14T22:31:00.708306747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:00.709422 containerd[1575]: time="2025-07-14T22:31:00.708719148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:00.800230 systemd[1]: run-containerd-runc-k8s.io-ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d-runc.rJBu8b.mount: Deactivated successfully. Jul 14 22:31:00.818375 containerd[1575]: time="2025-07-14T22:31:00.813683313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:31:00.818375 containerd[1575]: time="2025-07-14T22:31:00.813859110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:31:00.818375 containerd[1575]: time="2025-07-14T22:31:00.814020791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:00.829444 containerd[1575]: time="2025-07-14T22:31:00.828568224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:00.859930 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:31:00.906487 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:31:00.924966 systemd-networkd[1240]: calicf27ed1f5cc: Link UP Jul 14 22:31:00.926828 systemd-networkd[1240]: calicf27ed1f5cc: Gained carrier Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.667 [INFO][5293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--5b429-eth0 goldmane-58fd7646b9- calico-system 353e5854-d86d-4ff9-b078-7e4fa34f4ed2 1136 0 2025-07-14 22:29:58 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-5b429 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicf27ed1f5cc [] [] }} ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.668 [INFO][5293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.804 [INFO][5330] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" HandleID="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.804 [INFO][5330] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" HandleID="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-5b429", "timestamp":"2025-07-14 22:31:00.803785509 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.804 [INFO][5330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.804 [INFO][5330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.804 [INFO][5330] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.813 [INFO][5330] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.837 [INFO][5330] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.853 [INFO][5330] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.856 [INFO][5330] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.860 [INFO][5330] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.860 [INFO][5330] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.863 [INFO][5330] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025 Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.870 [INFO][5330] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.910 [INFO][5330] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.911 [INFO][5330] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" host="localhost" Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.911 [INFO][5330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:31:00.964795 containerd[1575]: 2025-07-14 22:31:00.911 [INFO][5330] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" HandleID="k8s-pod-network.db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.966377 containerd[1575]: 2025-07-14 22:31:00.918 [INFO][5293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5b429-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"353e5854-d86d-4ff9-b078-7e4fa34f4ed2", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-5b429", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf27ed1f5cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:00.966377 containerd[1575]: 2025-07-14 22:31:00.918 [INFO][5293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.966377 containerd[1575]: 2025-07-14 22:31:00.919 [INFO][5293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf27ed1f5cc ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.966377 containerd[1575]: 2025-07-14 22:31:00.926 [INFO][5293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.966377 containerd[1575]: 2025-07-14 22:31:00.931 [INFO][5293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5b429-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"353e5854-d86d-4ff9-b078-7e4fa34f4ed2", ResourceVersion:"1136", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025", Pod:"goldmane-58fd7646b9-5b429", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf27ed1f5cc", MAC:"4a:0f:96:ec:c9:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:00.966377 containerd[1575]: 2025-07-14 22:31:00.953 [INFO][5293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025" Namespace="calico-system" Pod="goldmane-58fd7646b9-5b429" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:00.998792 containerd[1575]: time="2025-07-14T22:31:00.998622323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7556875495-z7qk6,Uid:74d2d033-87d9-4d3f-b1f8-1b18151c4e93,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d\"" Jul 14 22:31:01.002856 containerd[1575]: time="2025-07-14T22:31:01.002787642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64d74cf67c-tmxjm,Uid:e3c48e21-ba3c-4349-bbfa-eab840c18864,Namespace:calico-system,Attempt:1,} returns sandbox id \"b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8\"" Jul 14 22:31:01.094229 containerd[1575]: time="2025-07-14T22:31:01.093965260Z" level=info msg="StartContainer for \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\" returns successfully" Jul 14 22:31:01.115036 containerd[1575]: time="2025-07-14T22:31:01.113509918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:31:01.115036 containerd[1575]: time="2025-07-14T22:31:01.113583840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:31:01.115036 containerd[1575]: time="2025-07-14T22:31:01.113599259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:01.115036 containerd[1575]: time="2025-07-14T22:31:01.113696025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:01.130803 systemd-networkd[1240]: cali9f74ee199ca: Link UP Jul 14 22:31:01.133202 systemd-networkd[1240]: cali9f74ee199ca: Gained carrier Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.708 [INFO][5258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0 coredns-7c65d6cfc9- kube-system 53c3c39f-fb7c-417c-96f3-a751d1e4f134 1135 0 2025-07-14 22:29:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-4kpsm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9f74ee199ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.708 [INFO][5258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.849 [INFO][5355] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" HandleID="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.849 [INFO][5355] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" HandleID="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000385830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-4kpsm", "timestamp":"2025-07-14 22:31:00.84949429 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.849 [INFO][5355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.911 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.911 [INFO][5355] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.920 [INFO][5355] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.940 [INFO][5355] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.955 [INFO][5355] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.964 [INFO][5355] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.973 [INFO][5355] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.973 [INFO][5355] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.980 [INFO][5355] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04 Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:00.991 [INFO][5355] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:01.101 [INFO][5355] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:01.101 [INFO][5355] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" host="localhost" Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:01.102 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 14 22:31:01.168351 containerd[1575]: 2025-07-14 22:31:01.102 [INFO][5355] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" HandleID="k8s-pod-network.5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.169788 containerd[1575]: 2025-07-14 22:31:01.115 [INFO][5258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"53c3c39f-fb7c-417c-96f3-a751d1e4f134", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-4kpsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f74ee199ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:01.169788 containerd[1575]: 2025-07-14 22:31:01.118 [INFO][5258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.169788 containerd[1575]: 2025-07-14 22:31:01.118 [INFO][5258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9f74ee199ca ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.169788 containerd[1575]: 2025-07-14 22:31:01.139 [INFO][5258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.169788 
containerd[1575]: 2025-07-14 22:31:01.147 [INFO][5258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"53c3c39f-fb7c-417c-96f3-a751d1e4f134", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04", Pod:"coredns-7c65d6cfc9-4kpsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f74ee199ca", MAC:"36:a7:69:bd:36:84", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:01.169788 containerd[1575]: 2025-07-14 22:31:01.161 [INFO][5258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4kpsm" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:01.203325 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:31:01.212698 containerd[1575]: time="2025-07-14T22:31:01.212400918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:31:01.212698 containerd[1575]: time="2025-07-14T22:31:01.212477905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:31:01.212698 containerd[1575]: time="2025-07-14T22:31:01.212511229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:01.219048 containerd[1575]: time="2025-07-14T22:31:01.217788448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:01.232636 systemd-networkd[1240]: calia733482647b: Link UP Jul 14 22:31:01.233247 systemd-networkd[1240]: calia733482647b: Gained carrier Jul 14 22:31:01.259569 containerd[1575]: time="2025-07-14T22:31:01.259426182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-5b429,Uid:353e5854-d86d-4ff9-b078-7e4fa34f4ed2,Namespace:calico-system,Attempt:1,} returns sandbox id \"db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025\"" Jul 14 22:31:01.275379 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:00.760 [INFO][5273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kbjjj-eth0 csi-node-driver- calico-system 9a5e1c4b-6531-4d21-a204-77a82ca32ab1 1134 0 2025-07-14 22:29:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kbjjj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia733482647b [] [] }} ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:00.767 [INFO][5273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:00.912 [INFO][5396] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" HandleID="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:00.912 [INFO][5396] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" HandleID="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kbjjj", "timestamp":"2025-07-14 22:31:00.912499065 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:00.912 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.102 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.103 [INFO][5396] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.123 [INFO][5396] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.155 [INFO][5396] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.166 [INFO][5396] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.171 [INFO][5396] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.184 [INFO][5396] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.185 [INFO][5396] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.191 [INFO][5396] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414 Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.200 [INFO][5396] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.211 [INFO][5396] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.211 [INFO][5396] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" host="localhost" Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.211 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
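
A small decoding note for the coredns WorkloadEndpoint dumps above, whose Go %#v formatting prints port numbers in hex; the conversions are plain arithmetic:

    0x35   = 53    (the dns and dns-tcp ports)
    0x23c1 = 9153  (the coredns Prometheus metrics port)

Reading Protocol{Type:1, NumVal:0x0, StrVal:"UDP"} as the string-typed variant of Calico's numorstring.Protocol is an inference from the dump itself, not a documented guarantee.
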
Jul 14 22:31:01.281137 containerd[1575]: 2025-07-14 22:31:01.211 [INFO][5396] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" HandleID="k8s-pod-network.41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.283631 containerd[1575]: 2025-07-14 22:31:01.222 [INFO][5273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kbjjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a5e1c4b-6531-4d21-a204-77a82ca32ab1", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kbjjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia733482647b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:01.283631 containerd[1575]: 2025-07-14 22:31:01.222 [INFO][5273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.283631 containerd[1575]: 2025-07-14 22:31:01.222 [INFO][5273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia733482647b ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.283631 containerd[1575]: 2025-07-14 22:31:01.235 [INFO][5273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.283631 containerd[1575]: 2025-07-14 22:31:01.238 [INFO][5273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kbjjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a5e1c4b-6531-4d21-a204-77a82ca32ab1", ResourceVersion:"1134", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414", Pod:"csi-node-driver-kbjjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia733482647b", MAC:"62:3d:23:b9:2f:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:01.283631 containerd[1575]: 2025-07-14 22:31:01.273 [INFO][5273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414" Namespace="calico-system" Pod="csi-node-driver-kbjjj" WorkloadEndpoint="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:01.317438 containerd[1575]: time="2025-07-14T22:31:01.316369403Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 14 22:31:01.317438 containerd[1575]: time="2025-07-14T22:31:01.316435971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 14 22:31:01.317438 containerd[1575]: time="2025-07-14T22:31:01.316459015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:01.317438 containerd[1575]: time="2025-07-14T22:31:01.316557253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 14 22:31:01.321520 containerd[1575]: time="2025-07-14T22:31:01.321451449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4kpsm,Uid:53c3c39f-fb7c-417c-96f3-a751d1e4f134,Namespace:kube-system,Attempt:1,} returns sandbox id \"5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04\"" Jul 14 22:31:01.323290 kubelet[2933]: E0714 22:31:01.323248 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:31:01.328389 containerd[1575]: time="2025-07-14T22:31:01.327317637Z" level=info msg="CreateContainer within sandbox \"5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 14 22:31:01.353156 systemd-resolved[1462]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 14 22:31:01.373464 containerd[1575]: time="2025-07-14T22:31:01.373365296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbjjj,Uid:9a5e1c4b-6531-4d21-a204-77a82ca32ab1,Namespace:calico-system,Attempt:1,} returns sandbox id \"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414\"" Jul 14 22:31:01.382258 containerd[1575]: time="2025-07-14T22:31:01.382207542Z" level=info msg="CreateContainer within sandbox \"5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b454b0d2b589bc3cf0bf5678c0f49bdffc11ffe3ecab8c0a9d7ee96c00b5a635\"" Jul 14 22:31:01.383325 containerd[1575]: time="2025-07-14T22:31:01.383279287Z" level=info msg="StartContainer for \"b454b0d2b589bc3cf0bf5678c0f49bdffc11ffe3ecab8c0a9d7ee96c00b5a635\"" Jul 14 22:31:01.458851 containerd[1575]: time="2025-07-14T22:31:01.458797047Z" level=info msg="StartContainer for \"b454b0d2b589bc3cf0bf5678c0f49bdffc11ffe3ecab8c0a9d7ee96c00b5a635\" returns successfully" Jul 14 22:31:01.475828 kubelet[2933]: E0714 22:31:01.474740 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:31:01.498314 kubelet[2933]: I0714 22:31:01.498239 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4kpsm" podStartSLOduration=102.49821354 podStartE2EDuration="1m42.49821354s" podCreationTimestamp="2025-07-14 22:29:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-14 22:31:01.496311874 +0000 UTC m=+108.379560281" watchObservedRunningTime="2025-07-14 22:31:01.49821354 +0000 UTC m=+108.381461946" Jul 14 22:31:02.373574 systemd-networkd[1240]: calia733482647b: Gained IPv6LL Jul 14 22:31:02.502193 kubelet[2933]: E0714 22:31:02.502141 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:31:02.565540 systemd-networkd[1240]: calidc9acb5be9b: Gained IPv6LL Jul 14 22:31:02.694573 systemd-networkd[1240]: cali9262eb822a4: Gained IPv6LL Jul 14 22:31:02.757512 systemd-networkd[1240]: calicf27ed1f5cc: Gained IPv6LL Jul 14 22:31:03.141611 systemd-networkd[1240]: cali9f74ee199ca: Gained IPv6LL Jul 14 22:31:03.503911 kubelet[2933]: E0714 
22:31:03.503763 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:31:04.357529 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:04.359604 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:04.357574 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:05.136628 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:45694.service - OpenSSH per-connection server daemon (10.0.0.1:45694). Jul 14 22:31:06.405495 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:06.405504 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:06.407374 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:07.017416 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 45694 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:07.019550 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:07.024136 systemd-logind[1548]: New session 11 of user core. Jul 14 22:31:07.029649 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 14 22:31:07.439221 sshd[5679]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:07.445705 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:45694.service: Deactivated successfully. Jul 14 22:31:07.448097 systemd[1]: session-11.scope: Deactivated successfully. Jul 14 22:31:07.448764 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Jul 14 22:31:07.449667 systemd-logind[1548]: Removed session 11. Jul 14 22:31:08.453505 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:08.481510 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:08.453515 systemd-resolved[1462]: Flushed all caches. 
Jul 14 22:31:09.189134 containerd[1575]: time="2025-07-14T22:31:09.189028120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:09.307695 containerd[1575]: time="2025-07-14T22:31:09.307583948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 14 22:31:09.395959 containerd[1575]: time="2025-07-14T22:31:09.395887101Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:09.551923 containerd[1575]: time="2025-07-14T22:31:09.551767329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:09.553075 containerd[1575]: time="2025-07-14T22:31:09.552731945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 9.134490052s" Jul 14 22:31:09.553075 containerd[1575]: time="2025-07-14T22:31:09.552786379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:31:09.554423 containerd[1575]: time="2025-07-14T22:31:09.554389818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 14 22:31:09.555928 containerd[1575]: time="2025-07-14T22:31:09.555887464Z" level=info msg="CreateContainer within sandbox \"e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:31:11.111998 containerd[1575]: time="2025-07-14T22:31:11.111916325Z" level=info msg="CreateContainer within sandbox \"e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fef0c31a329fa7df27c2fdf1352aad10b9b795de8b59ae14626e1a0f60d5016a\"" Jul 14 22:31:11.112723 containerd[1575]: time="2025-07-14T22:31:11.112584344Z" level=info msg="StartContainer for \"fef0c31a329fa7df27c2fdf1352aad10b9b795de8b59ae14626e1a0f60d5016a\"" Jul 14 22:31:11.150748 systemd[1]: run-containerd-runc-k8s.io-fef0c31a329fa7df27c2fdf1352aad10b9b795de8b59ae14626e1a0f60d5016a-runc.omNs9r.mount: Deactivated successfully. 
Jul 14 22:31:11.407148 containerd[1575]: time="2025-07-14T22:31:11.406938050Z" level=info msg="StartContainer for \"fef0c31a329fa7df27c2fdf1352aad10b9b795de8b59ae14626e1a0f60d5016a\" returns successfully" Jul 14 22:31:11.428165 containerd[1575]: time="2025-07-14T22:31:11.428000166Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:11.461496 containerd[1575]: time="2025-07-14T22:31:11.461372704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 14 22:31:11.464332 containerd[1575]: time="2025-07-14T22:31:11.464297409Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 1.909873477s" Jul 14 22:31:11.464407 containerd[1575]: time="2025-07-14T22:31:11.464335242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 14 22:31:11.465767 containerd[1575]: time="2025-07-14T22:31:11.465579763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 14 22:31:11.468174 containerd[1575]: time="2025-07-14T22:31:11.468142005Z" level=info msg="CreateContainer within sandbox \"46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 14 22:31:11.563481 kubelet[2933]: I0714 22:31:11.562855 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7556875495-4j7xf" podStartSLOduration=60.988364007 podStartE2EDuration="1m20.56281283s" podCreationTimestamp="2025-07-14 22:29:51 +0000 UTC" firstStartedPulling="2025-07-14 22:30:49.97966143 +0000 UTC m=+96.862909836" lastFinishedPulling="2025-07-14 22:31:09.554110253 +0000 UTC m=+116.437358659" observedRunningTime="2025-07-14 22:31:11.562692068 +0000 UTC m=+118.445940474" watchObservedRunningTime="2025-07-14 22:31:11.56281283 +0000 UTC m=+118.446061236" Jul 14 22:31:11.607879 containerd[1575]: time="2025-07-14T22:31:11.607806802Z" level=info msg="CreateContainer within sandbox \"46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a381d4daac53e9da583fe8ac34276998bce143f95747e931028d2bf9686f871\"" Jul 14 22:31:11.609658 containerd[1575]: time="2025-07-14T22:31:11.609617697Z" level=info msg="StartContainer for \"1a381d4daac53e9da583fe8ac34276998bce143f95747e931028d2bf9686f871\"" Jul 14 22:31:11.792499 containerd[1575]: time="2025-07-14T22:31:11.792316596Z" level=info msg="StartContainer for \"1a381d4daac53e9da583fe8ac34276998bce143f95747e931028d2bf9686f871\" returns successfully" Jul 14 22:31:12.451808 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:36496.service - OpenSSH per-connection server daemon (10.0.0.1:36496). 
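
Two timing details in the records above reward a closer look. First, the second "Pulled image" for ghcr.io/flatcar/calico/apiserver:v3.30.2 completes in 1.909873477s with only "bytes read=77", versus 47317977 bytes and 9.134490052s for the first pull: the blobs were already in the local content store, so the second pull appears to amount to little more than a manifest check. Second, kubelet's pod_startup_latency_tracker line for calico-apiserver-7556875495-4j7xf encodes arithmetic that can be reproduced exactly from its own fields (the breakdown below is an inference from those fields, consistent with the startup SLO excluding image-pull time, not a documented formula):

    podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp
                        = 22:31:11.56281283  - 22:29:51           = 1m20.56281283s
    image-pull window   = lastFinishedPulling - firstStartedPulling
                        = 22:31:09.554110253 - 22:30:49.97966143  = 19.574448823s
    podStartSLOduration = podStartE2EDuration - image-pull window
                        = 80.56281283s - 19.574448823s            = 60.988364007s

All three results match the logged values digit for digit.
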
Jul 14 22:31:12.491095 sshd[5791]: Accepted publickey for core from 10.0.0.1 port 36496 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:12.493407 sshd[5791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:12.497727 systemd-logind[1548]: New session 12 of user core. Jul 14 22:31:12.510777 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 14 22:31:12.615882 kubelet[2933]: I0714 22:31:12.615780 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7556875495-z7qk6" podStartSLOduration=71.151471627 podStartE2EDuration="1m21.615751843s" podCreationTimestamp="2025-07-14 22:29:51 +0000 UTC" firstStartedPulling="2025-07-14 22:31:01.001104674 +0000 UTC m=+107.884353080" lastFinishedPulling="2025-07-14 22:31:11.46538487 +0000 UTC m=+118.348633296" observedRunningTime="2025-07-14 22:31:12.613860075 +0000 UTC m=+119.497108491" watchObservedRunningTime="2025-07-14 22:31:12.615751843 +0000 UTC m=+119.499000249" Jul 14 22:31:12.810933 sshd[5791]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:12.816911 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:36496.service: Deactivated successfully. Jul 14 22:31:12.822559 systemd[1]: session-12.scope: Deactivated successfully. Jul 14 22:31:12.824262 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Jul 14 22:31:12.825519 systemd-logind[1548]: Removed session 12. Jul 14 22:31:13.342284 containerd[1575]: time="2025-07-14T22:31:13.342179187Z" level=info msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" Jul 14 22:31:13.528700 kubelet[2933]: I0714 22:31:13.528641 2933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.634 [WARNING][5821] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5b429-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"353e5854-d86d-4ff9-b078-7e4fa34f4ed2", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025", Pod:"goldmane-58fd7646b9-5b429", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf27ed1f5cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.635 [INFO][5821] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.635 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" iface="eth0" netns="" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.635 [INFO][5821] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.635 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.657 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.657 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.657 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.810 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.810 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.811 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:13.818437 containerd[1575]: 2025-07-14 22:31:13.814 [INFO][5821] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:13.819034 containerd[1575]: time="2025-07-14T22:31:13.818505436Z" level=info msg="TearDown network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" successfully" Jul 14 22:31:13.819034 containerd[1575]: time="2025-07-14T22:31:13.818539020Z" level=info msg="StopPodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" returns successfully" Jul 14 22:31:13.819431 containerd[1575]: time="2025-07-14T22:31:13.819334331Z" level=info msg="RemovePodSandbox for \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" Jul 14 22:31:13.822425 containerd[1575]: time="2025-07-14T22:31:13.822395445Z" level=info msg="Forcibly stopping sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\"" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.928 [WARNING][5847] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--5b429-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"353e5854-d86d-4ff9-b078-7e4fa34f4ed2", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025", Pod:"goldmane-58fd7646b9-5b429", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicf27ed1f5cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.928 [INFO][5847] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.928 [INFO][5847] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" iface="eth0" netns="" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.928 [INFO][5847] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.929 [INFO][5847] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.953 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.954 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.954 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.979 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.979 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" HandleID="k8s-pod-network.73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Workload="localhost-k8s-goldmane--58fd7646b9--5b429-eth0" Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.994 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:14.001776 containerd[1575]: 2025-07-14 22:31:13.998 [INFO][5847] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f" Jul 14 22:31:14.002431 containerd[1575]: time="2025-07-14T22:31:14.002367604Z" level=info msg="TearDown network for sandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" successfully" Jul 14 22:31:14.062678 containerd[1575]: time="2025-07-14T22:31:14.062566255Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:14.062960 containerd[1575]: time="2025-07-14T22:31:14.062702525Z" level=info msg="RemovePodSandbox \"73f015fb861a30ae90e7ff3fd83b2210cd9cee08299d5c7f2a0c05494dca304f\" returns successfully" Jul 14 22:31:14.064188 containerd[1575]: time="2025-07-14T22:31:14.064106400Z" level=info msg="StopPodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\"" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.132 [WARNING][5874] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2efa654-2e4a-4eb0-bd1d-971920483d9d", ResourceVersion:"1236", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343", Pod:"calico-apiserver-7556875495-4j7xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b9517f2649", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.132 [INFO][5874] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.132 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" iface="eth0" netns="" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.133 [INFO][5874] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.133 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.158 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.159 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.159 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.176 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.176 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.181 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:14.190063 containerd[1575]: 2025-07-14 22:31:14.186 [INFO][5874] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.190063 containerd[1575]: time="2025-07-14T22:31:14.190027223Z" level=info msg="TearDown network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" successfully" Jul 14 22:31:14.190063 containerd[1575]: time="2025-07-14T22:31:14.190059264Z" level=info msg="StopPodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" returns successfully" Jul 14 22:31:14.192412 containerd[1575]: time="2025-07-14T22:31:14.192105517Z" level=info msg="RemovePodSandbox for \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\"" Jul 14 22:31:14.192412 containerd[1575]: time="2025-07-14T22:31:14.192161033Z" level=info msg="Forcibly stopping sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\"" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.247 [WARNING][5900] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"f2efa654-2e4a-4eb0-bd1d-971920483d9d", ResourceVersion:"1236", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e65613088fa93e10da51da7562e66f6404e587fabe043eebd40670091aa2f343", Pod:"calico-apiserver-7556875495-4j7xf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0b9517f2649", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.247 [INFO][5900] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.247 [INFO][5900] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" iface="eth0" netns="" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.247 [INFO][5900] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.248 [INFO][5900] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.279 [INFO][5908] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.279 [INFO][5908] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.279 [INFO][5908] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.286 [WARNING][5908] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.286 [INFO][5908] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" HandleID="k8s-pod-network.595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Workload="localhost-k8s-calico--apiserver--7556875495--4j7xf-eth0" Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.288 [INFO][5908] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:14.296584 containerd[1575]: 2025-07-14 22:31:14.292 [INFO][5900] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95" Jul 14 22:31:14.313110 containerd[1575]: time="2025-07-14T22:31:14.296650057Z" level=info msg="TearDown network for sandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" successfully" Jul 14 22:31:14.353361 containerd[1575]: time="2025-07-14T22:31:14.353194802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:14.353361 containerd[1575]: time="2025-07-14T22:31:14.353303320Z" level=info msg="RemovePodSandbox \"595895137a52a96b661c9b38d1bfc2fba1837d64378ae5dce262978951dcbc95\" returns successfully" Jul 14 22:31:14.354251 containerd[1575]: time="2025-07-14T22:31:14.354013357Z" level=info msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" Jul 14 22:31:14.535364 kubelet[2933]: I0714 22:31:14.534819 2933 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.539 [WARNING][5925] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"74d2d033-87d9-4d3f-b1f8-1b18151c4e93", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d", Pod:"calico-apiserver-7556875495-z7qk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9262eb822a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.539 [INFO][5925] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.539 [INFO][5925] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" iface="eth0" netns="" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.539 [INFO][5925] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.539 [INFO][5925] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.575 [INFO][5934] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.576 [INFO][5934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.576 [INFO][5934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.582 [WARNING][5934] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.582 [INFO][5934] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.584 [INFO][5934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:14.592715 containerd[1575]: 2025-07-14 22:31:14.588 [INFO][5925] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:14.593222 containerd[1575]: time="2025-07-14T22:31:14.592766647Z" level=info msg="TearDown network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" successfully" Jul 14 22:31:14.593222 containerd[1575]: time="2025-07-14T22:31:14.592797606Z" level=info msg="StopPodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" returns successfully" Jul 14 22:31:14.593411 containerd[1575]: time="2025-07-14T22:31:14.593338962Z" level=info msg="RemovePodSandbox for \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" Jul 14 22:31:14.593411 containerd[1575]: time="2025-07-14T22:31:14.593398055Z" level=info msg="Forcibly stopping sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\"" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.749 [WARNING][5951] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0", GenerateName:"calico-apiserver-7556875495-", Namespace:"calico-apiserver", SelfLink:"", UID:"74d2d033-87d9-4d3f-b1f8-1b18151c4e93", ResourceVersion:"1244", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7556875495", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46d8cfaa5bd53e272b7e4a077bbbb3a87b8e4f90062218dbd669033c58e7677d", Pod:"calico-apiserver-7556875495-z7qk6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9262eb822a4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.749 [INFO][5951] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.749 [INFO][5951] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" iface="eth0" netns="" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.749 [INFO][5951] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.749 [INFO][5951] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.781 [INFO][5960] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.781 [INFO][5960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.781 [INFO][5960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.952 [WARNING][5960] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:14.952 [INFO][5960] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" HandleID="k8s-pod-network.0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Workload="localhost-k8s-calico--apiserver--7556875495--z7qk6-eth0" Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:15.032 [INFO][5960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:15.046552 containerd[1575]: 2025-07-14 22:31:15.041 [INFO][5951] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31" Jul 14 22:31:15.049336 containerd[1575]: time="2025-07-14T22:31:15.046607311Z" level=info msg="TearDown network for sandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" successfully" Jul 14 22:31:15.355038 containerd[1575]: time="2025-07-14T22:31:15.354312652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:15.355038 containerd[1575]: time="2025-07-14T22:31:15.354440937Z" level=info msg="RemovePodSandbox \"0bc7127a49d17b00a6a31684dea54c8021e9810de5858b3171e75740247fdb31\" returns successfully" Jul 14 22:31:15.355865 containerd[1575]: time="2025-07-14T22:31:15.355753936Z" level=info msg="StopPodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\"" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.646 [WARNING][5980] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92eef89a-ebb2-46c1-949c-ac95e4738764", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524", Pod:"coredns-7c65d6cfc9-frkqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cb0230cc91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.647 [INFO][5980] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.647 [INFO][5980] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" iface="eth0" netns="" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.647 [INFO][5980] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.647 [INFO][5980] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.673 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.673 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.673 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.686 [WARNING][5989] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.686 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.688 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:15.695448 containerd[1575]: 2025-07-14 22:31:15.691 [INFO][5980] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:15.695448 containerd[1575]: time="2025-07-14T22:31:15.695322287Z" level=info msg="TearDown network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" successfully" Jul 14 22:31:15.695448 containerd[1575]: time="2025-07-14T22:31:15.695378635Z" level=info msg="StopPodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" returns successfully" Jul 14 22:31:16.027821 containerd[1575]: time="2025-07-14T22:31:15.696097680Z" level=info msg="RemovePodSandbox for \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\"" Jul 14 22:31:16.027821 containerd[1575]: time="2025-07-14T22:31:15.696126935Z" level=info msg="Forcibly stopping sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\"" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.093 [WARNING][6007] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"92eef89a-ebb2-46c1-949c-ac95e4738764", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"35aa71bec5853c7d9e074a326cb9828d8b6043e72e55f2202dfc346592597524", Pod:"coredns-7c65d6cfc9-frkqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3cb0230cc91", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.094 [INFO][6007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.094 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" iface="eth0" netns="" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.094 [INFO][6007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.094 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.120 [INFO][6016] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.120 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.120 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.131 [WARNING][6016] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.131 [INFO][6016] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" HandleID="k8s-pod-network.92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Workload="localhost-k8s-coredns--7c65d6cfc9--frkqm-eth0" Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.133 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:16.142446 containerd[1575]: 2025-07-14 22:31:16.138 [INFO][6007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a" Jul 14 22:31:16.143061 containerd[1575]: time="2025-07-14T22:31:16.142641085Z" level=info msg="TearDown network for sandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" successfully" Jul 14 22:31:16.162314 containerd[1575]: time="2025-07-14T22:31:16.162235786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:16.163811 containerd[1575]: time="2025-07-14T22:31:16.162329855Z" level=info msg="RemovePodSandbox \"92cbcd5644c6d84fd07502984586bf7fd9580b042e5c82db671db6d32a80b15a\" returns successfully" Jul 14 22:31:16.163811 containerd[1575]: time="2025-07-14T22:31:16.163591045Z" level=info msg="StopPodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\"" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.213 [WARNING][6040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--944fbfff--rkn5j-eth0", GenerateName:"whisker-944fbfff-", Namespace:"calico-system", SelfLink:"", UID:"0df97cf3-d658-4a6f-aa84-b00ae717886f", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 30, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"944fbfff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452", Pod:"whisker-944fbfff-rkn5j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali990cc895aa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.214 [INFO][6040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.216 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" iface="eth0" netns="" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.216 [INFO][6040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.216 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.278 [INFO][6050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.278 [INFO][6050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.278 [INFO][6050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.286 [WARNING][6050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.286 [INFO][6050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.287 [INFO][6050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:16.297518 containerd[1575]: 2025-07-14 22:31:16.291 [INFO][6040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.297518 containerd[1575]: time="2025-07-14T22:31:16.297392926Z" level=info msg="TearDown network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" successfully" Jul 14 22:31:16.297518 containerd[1575]: time="2025-07-14T22:31:16.297429475Z" level=info msg="StopPodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" returns successfully" Jul 14 22:31:16.298036 containerd[1575]: time="2025-07-14T22:31:16.297922328Z" level=info msg="RemovePodSandbox for \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\"" Jul 14 22:31:16.298036 containerd[1575]: time="2025-07-14T22:31:16.297963296Z" level=info msg="Forcibly stopping sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\"" Jul 14 22:31:16.390061 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:16.392685 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:16.390091 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.751 [WARNING][6067] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--944fbfff--rkn5j-eth0", GenerateName:"whisker-944fbfff-", Namespace:"calico-system", SelfLink:"", UID:"0df97cf3-d658-4a6f-aa84-b00ae717886f", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 30, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"944fbfff", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452", Pod:"whisker-944fbfff-rkn5j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali990cc895aa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.751 [INFO][6067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.751 [INFO][6067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" iface="eth0" netns="" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.751 [INFO][6067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.751 [INFO][6067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.781 [INFO][6076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.781 [INFO][6076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.781 [INFO][6076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.873 [WARNING][6076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.873 [INFO][6076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" HandleID="k8s-pod-network.7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.874 [INFO][6076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:16.881292 containerd[1575]: 2025-07-14 22:31:16.878 [INFO][6067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39" Jul 14 22:31:16.882461 containerd[1575]: time="2025-07-14T22:31:16.881331285Z" level=info msg="TearDown network for sandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" successfully" Jul 14 22:31:17.823891 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:36544.service - OpenSSH per-connection server daemon (10.0.0.1:36544). Jul 14 22:31:17.841543 containerd[1575]: time="2025-07-14T22:31:17.841472926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:17.841894 containerd[1575]: time="2025-07-14T22:31:17.841848082Z" level=info msg="RemovePodSandbox \"7f5e9b4cc8d76039b5cdff14b2f2030bfe00e64187ebc750d3ff5313c6abfc39\" returns successfully" Jul 14 22:31:17.842623 containerd[1575]: time="2025-07-14T22:31:17.842489408Z" level=info msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" Jul 14 22:31:17.883394 sshd[6086]: Accepted publickey for core from 10.0.0.1 port 36544 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:17.888851 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:17.890722 containerd[1575]: time="2025-07-14T22:31:17.890671518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:17.914803 systemd-logind[1548]: New session 13 of user core. Jul 14 22:31:17.922305 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.897 [WARNING][6098] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kbjjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a5e1c4b-6531-4d21-a204-77a82ca32ab1", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414", Pod:"csi-node-driver-kbjjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia733482647b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.898 [INFO][6098] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.898 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" iface="eth0" netns="" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.898 [INFO][6098] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.898 [INFO][6098] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.943 [INFO][6107] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.943 [INFO][6107] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.943 [INFO][6107] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.950 [WARNING][6107] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.950 [INFO][6107] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.952 [INFO][6107] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:17.962450 containerd[1575]: 2025-07-14 22:31:17.958 [INFO][6098] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:17.981993 containerd[1575]: time="2025-07-14T22:31:17.962496739Z" level=info msg="TearDown network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" successfully" Jul 14 22:31:17.981993 containerd[1575]: time="2025-07-14T22:31:17.962531575Z" level=info msg="StopPodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" returns successfully" Jul 14 22:31:17.981993 containerd[1575]: time="2025-07-14T22:31:17.963260628Z" level=info msg="RemovePodSandbox for \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" Jul 14 22:31:17.981993 containerd[1575]: time="2025-07-14T22:31:17.963295265Z" level=info msg="Forcibly stopping sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\"" Jul 14 22:31:18.034416 containerd[1575]: time="2025-07-14T22:31:18.034162462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.009 [WARNING][6127] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kbjjj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9a5e1c4b-6531-4d21-a204-77a82ca32ab1", ResourceVersion:"1184", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414", Pod:"csi-node-driver-kbjjj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia733482647b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.010 [INFO][6127] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.010 [INFO][6127] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" iface="eth0" netns="" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.010 [INFO][6127] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.010 [INFO][6127] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.041 [INFO][6139] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.041 [INFO][6139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.041 [INFO][6139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.067 [WARNING][6139] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.067 [INFO][6139] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" HandleID="k8s-pod-network.733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Workload="localhost-k8s-csi--node--driver--kbjjj-eth0" Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.071 [INFO][6139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:18.078389 containerd[1575]: 2025-07-14 22:31:18.075 [INFO][6127] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad" Jul 14 22:31:18.078389 containerd[1575]: time="2025-07-14T22:31:18.078275401Z" level=info msg="TearDown network for sandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" successfully" Jul 14 22:31:18.350403 sshd[6086]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:18.355613 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:36544.service: Deactivated successfully. Jul 14 22:31:18.358626 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Jul 14 22:31:18.358690 systemd[1]: session-13.scope: Deactivated successfully. Jul 14 22:31:18.360296 systemd-logind[1548]: Removed session 13. Jul 14 22:31:18.437480 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:18.508808 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:18.437489 systemd-resolved[1462]: Flushed all caches. 
Jul 14 22:31:18.636115 containerd[1575]: time="2025-07-14T22:31:18.635964622Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:19.267062 containerd[1575]: time="2025-07-14T22:31:19.266982174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:19.268150 containerd[1575]: time="2025-07-14T22:31:19.268099478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 7.802473557s" Jul 14 22:31:19.268228 containerd[1575]: time="2025-07-14T22:31:19.268148281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 14 22:31:19.269656 containerd[1575]: time="2025-07-14T22:31:19.269619031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 14 22:31:19.280247 containerd[1575]: time="2025-07-14T22:31:19.280174694Z" level=info msg="CreateContainer within sandbox \"b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 14 22:31:19.334608 containerd[1575]: time="2025-07-14T22:31:19.334511587Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:19.334608 containerd[1575]: time="2025-07-14T22:31:19.334606528Z" level=info msg="RemovePodSandbox \"733baa90252d3246b062b8e2382ac81cee751237f23456e058f3c0ae6ae03fad\" returns successfully" Jul 14 22:31:19.335310 containerd[1575]: time="2025-07-14T22:31:19.335253684Z" level=info msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.494 [WARNING][6167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"53c3c39f-fb7c-417c-96f3-a751d1e4f134", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04", Pod:"coredns-7c65d6cfc9-4kpsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f74ee199ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.494 [INFO][6167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.494 [INFO][6167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" iface="eth0" netns="" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.494 [INFO][6167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.494 [INFO][6167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.519 [INFO][6176] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.519 [INFO][6176] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.519 [INFO][6176] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.525 [WARNING][6176] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.525 [INFO][6176] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.526 [INFO][6176] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:19.533540 containerd[1575]: 2025-07-14 22:31:19.530 [INFO][6167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.534129 containerd[1575]: time="2025-07-14T22:31:19.533595260Z" level=info msg="TearDown network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" successfully" Jul 14 22:31:19.534129 containerd[1575]: time="2025-07-14T22:31:19.533627331Z" level=info msg="StopPodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" returns successfully" Jul 14 22:31:19.534363 containerd[1575]: time="2025-07-14T22:31:19.534302721Z" level=info msg="RemovePodSandbox for \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" Jul 14 22:31:19.534402 containerd[1575]: time="2025-07-14T22:31:19.534376021Z" level=info msg="Forcibly stopping sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\"" Jul 14 22:31:19.688864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862778742.mount: Deactivated successfully. Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.644 [WARNING][6195] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"53c3c39f-fb7c-417c-96f3-a751d1e4f134", ResourceVersion:"1202", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cfad2f7f0a58261de3ff987eacff903c24f5c7600b985b0d3e0eab9a0bfcb04", Pod:"coredns-7c65d6cfc9-4kpsm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9f74ee199ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.644 [INFO][6195] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.644 [INFO][6195] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" iface="eth0" netns="" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.644 [INFO][6195] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.644 [INFO][6195] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.672 [INFO][6203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.672 [INFO][6203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.672 [INFO][6203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.687 [WARNING][6203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.687 [INFO][6203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" HandleID="k8s-pod-network.ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Workload="localhost-k8s-coredns--7c65d6cfc9--4kpsm-eth0" Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.690 [INFO][6203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:19.700398 containerd[1575]: 2025-07-14 22:31:19.696 [INFO][6195] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4" Jul 14 22:31:19.700398 containerd[1575]: time="2025-07-14T22:31:19.699958922Z" level=info msg="TearDown network for sandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" successfully" Jul 14 22:31:19.828333 containerd[1575]: time="2025-07-14T22:31:19.827950027Z" level=info msg="CreateContainer within sandbox \"b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"702beb8fe6d9e903ed35e8e134d3617598827c977d9ac757fe2f220b53494586\"" Jul 14 22:31:19.832118 containerd[1575]: time="2025-07-14T22:31:19.830610759Z" level=info msg="StartContainer for \"702beb8fe6d9e903ed35e8e134d3617598827c977d9ac757fe2f220b53494586\"" Jul 14 22:31:19.948553 containerd[1575]: time="2025-07-14T22:31:19.948479860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:19.948724 containerd[1575]: time="2025-07-14T22:31:19.948590171Z" level=info msg="RemovePodSandbox \"ca548e3c2796df7876727fafb143e1febff4a3344286193aa13abaa1b1f4c6b4\" returns successfully" Jul 14 22:31:19.949708 containerd[1575]: time="2025-07-14T22:31:19.949584940Z" level=info msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.178 [WARNING][6237] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0", GenerateName:"calico-kube-controllers-64d74cf67c-", Namespace:"calico-system", SelfLink:"", UID:"e3c48e21-ba3c-4349-bbfa-eab840c18864", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d74cf67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8", Pod:"calico-kube-controllers-64d74cf67c-tmxjm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc9acb5be9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.179 [INFO][6237] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.179 [INFO][6237] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" iface="eth0" netns="" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.179 [INFO][6237] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.179 [INFO][6237] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.203 [INFO][6272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.203 [INFO][6272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.203 [INFO][6272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.297 [WARNING][6272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.297 [INFO][6272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.299 [INFO][6272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:20.305133 containerd[1575]: 2025-07-14 22:31:20.302 [INFO][6237] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.306047 containerd[1575]: time="2025-07-14T22:31:20.305171634Z" level=info msg="TearDown network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" successfully" Jul 14 22:31:20.306047 containerd[1575]: time="2025-07-14T22:31:20.305207333Z" level=info msg="StopPodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" returns successfully" Jul 14 22:31:20.306047 containerd[1575]: time="2025-07-14T22:31:20.305781409Z" level=info msg="RemovePodSandbox for \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" Jul 14 22:31:20.306047 containerd[1575]: time="2025-07-14T22:31:20.305824110Z" level=info msg="Forcibly stopping sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\"" Jul 14 22:31:20.485621 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:20.485678 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:20.487380 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.395 [WARNING][6289] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0", GenerateName:"calico-kube-controllers-64d74cf67c-", Namespace:"calico-system", SelfLink:"", UID:"e3c48e21-ba3c-4349-bbfa-eab840c18864", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2025, time.July, 14, 22, 29, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64d74cf67c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b1dc00471613df5122944f52116f0e1ad8691036a0bac2fd9d745998b1ace5f8", Pod:"calico-kube-controllers-64d74cf67c-tmxjm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc9acb5be9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.395 [INFO][6289] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.395 [INFO][6289] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" iface="eth0" netns="" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.395 [INFO][6289] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.395 [INFO][6289] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.417 [INFO][6299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.417 [INFO][6299] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.417 [INFO][6299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.698 [WARNING][6299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.698 [INFO][6299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" HandleID="k8s-pod-network.e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Workload="localhost-k8s-calico--kube--controllers--64d74cf67c--tmxjm-eth0" Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.699 [INFO][6299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:20.705853 containerd[1575]: 2025-07-14 22:31:20.702 [INFO][6289] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04" Jul 14 22:31:20.705853 containerd[1575]: time="2025-07-14T22:31:20.705830680Z" level=info msg="TearDown network for sandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" successfully" Jul 14 22:31:20.950253 containerd[1575]: time="2025-07-14T22:31:20.950180533Z" level=info msg="StartContainer for \"702beb8fe6d9e903ed35e8e134d3617598827c977d9ac757fe2f220b53494586\" returns successfully" Jul 14 22:31:21.490783 containerd[1575]: time="2025-07-14T22:31:21.490670730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 14 22:31:21.491513 containerd[1575]: time="2025-07-14T22:31:21.490801379Z" level=info msg="RemovePodSandbox \"e3ed2b883cca567430d36e8242f68170422e27a92ff174fff752f91ce621af04\" returns successfully" Jul 14 22:31:22.384164 kubelet[2933]: I0714 22:31:22.384077 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64d74cf67c-tmxjm" podStartSLOduration=66.119083225 podStartE2EDuration="1m24.384054379s" podCreationTimestamp="2025-07-14 22:29:58 +0000 UTC" firstStartedPulling="2025-07-14 22:31:01.004421815 +0000 UTC m=+107.887670221" lastFinishedPulling="2025-07-14 22:31:19.269392949 +0000 UTC m=+126.152641375" observedRunningTime="2025-07-14 22:31:21.617079334 +0000 UTC m=+128.500327740" watchObservedRunningTime="2025-07-14 22:31:22.384054379 +0000 UTC m=+129.267302785" Jul 14 22:31:23.363634 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:39994.service - OpenSSH per-connection server daemon (10.0.0.1:39994). Jul 14 22:31:23.404468 sshd[6332]: Accepted publickey for core from 10.0.0.1 port 39994 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:23.406966 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:23.411814 systemd-logind[1548]: New session 14 of user core. Jul 14 22:31:23.424836 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 14 22:31:23.852868 sshd[6332]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:23.857903 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:39994.service: Deactivated successfully. Jul 14 22:31:23.860143 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Jul 14 22:31:23.860272 systemd[1]: session-14.scope: Deactivated successfully. 
Jul 14 22:31:23.861432 systemd-logind[1548]: Removed session 14. Jul 14 22:31:26.193884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3406061982.mount: Deactivated successfully. Jul 14 22:31:26.229748 containerd[1575]: time="2025-07-14T22:31:26.229663045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:26.232254 containerd[1575]: time="2025-07-14T22:31:26.231956852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 14 22:31:26.235553 containerd[1575]: time="2025-07-14T22:31:26.235490423Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:26.238701 containerd[1575]: time="2025-07-14T22:31:26.238651213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:26.239459 containerd[1575]: time="2025-07-14T22:31:26.239417115Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 6.969753188s" Jul 14 22:31:26.239459 containerd[1575]: time="2025-07-14T22:31:26.239455789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 14 22:31:26.240993 containerd[1575]: time="2025-07-14T22:31:26.240800785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 14 22:31:26.242116 containerd[1575]: time="2025-07-14T22:31:26.242082770Z" level=info msg="CreateContainer within sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 14 22:31:26.263451 containerd[1575]: time="2025-07-14T22:31:26.263380232Z" level=info msg="CreateContainer within sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\"" Jul 14 22:31:26.264497 containerd[1575]: time="2025-07-14T22:31:26.264170119Z" level=info msg="StartContainer for \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\"" Jul 14 22:31:26.350618 containerd[1575]: time="2025-07-14T22:31:26.350545407Z" level=info msg="StartContainer for \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\" returns successfully" Jul 14 22:31:27.115171 containerd[1575]: time="2025-07-14T22:31:27.115071936Z" level=info msg="StopContainer for \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\" with timeout 30 (s)" Jul 14 22:31:27.120530 containerd[1575]: time="2025-07-14T22:31:27.120445105Z" level=info msg="StopContainer for \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\" with timeout 30 (s)" Jul 14 22:31:27.122402 containerd[1575]: time="2025-07-14T22:31:27.122332635Z" level=info msg="Stop 
container \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\" with signal terminated" Jul 14 22:31:27.122593 containerd[1575]: time="2025-07-14T22:31:27.122407098Z" level=info msg="Stop container \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\" with signal terminated" Jul 14 22:31:27.172195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa-rootfs.mount: Deactivated successfully. Jul 14 22:31:27.181907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d-rootfs.mount: Deactivated successfully. Jul 14 22:31:27.242169 kubelet[2933]: I0714 22:31:27.242073 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-944fbfff-rkn5j" podStartSLOduration=39.425872391 podStartE2EDuration="1m17.242045658s" podCreationTimestamp="2025-07-14 22:30:10 +0000 UTC" firstStartedPulling="2025-07-14 22:30:48.424493261 +0000 UTC m=+95.307741667" lastFinishedPulling="2025-07-14 22:31:26.240666528 +0000 UTC m=+133.123914934" observedRunningTime="2025-07-14 22:31:27.241814347 +0000 UTC m=+134.125062773" watchObservedRunningTime="2025-07-14 22:31:27.242045658 +0000 UTC m=+134.125294064" Jul 14 22:31:28.357503 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:28.392004 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:28.357538 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:28.868848 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:34708.service - OpenSSH per-connection server daemon (10.0.0.1:34708). Jul 14 22:31:28.987398 sshd[6436]: Accepted publickey for core from 10.0.0.1 port 34708 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:28.989415 sshd[6436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:28.994164 systemd-logind[1548]: New session 15 of user core. Jul 14 22:31:29.004821 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 14 22:31:29.101846 containerd[1575]: time="2025-07-14T22:31:29.076290111Z" level=info msg="shim disconnected" id=ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d namespace=k8s.io Jul 14 22:31:29.103111 containerd[1575]: time="2025-07-14T22:31:29.101809224Z" level=warning msg="cleaning up after shim disconnected" id=ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d namespace=k8s.io Jul 14 22:31:29.103111 containerd[1575]: time="2025-07-14T22:31:29.076307094Z" level=info msg="shim disconnected" id=812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa namespace=k8s.io Jul 14 22:31:29.103111 containerd[1575]: time="2025-07-14T22:31:29.102788210Z" level=warning msg="cleaning up after shim disconnected" id=812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa namespace=k8s.io Jul 14 22:31:29.103111 containerd[1575]: time="2025-07-14T22:31:29.102801836Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:31:29.103111 containerd[1575]: time="2025-07-14T22:31:29.102720721Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:31:29.134048 containerd[1575]: time="2025-07-14T22:31:29.133873095Z" level=warning msg="cleanup warnings time=\"2025-07-14T22:31:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 14 22:31:29.396946 containerd[1575]: time="2025-07-14T22:31:29.396807701Z" level=info msg="StopContainer for \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\" returns successfully" Jul 14 22:31:29.397522 containerd[1575]: time="2025-07-14T22:31:29.397146136Z" level=info msg="StopContainer for \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\" returns successfully" Jul 14 22:31:29.398183 containerd[1575]: time="2025-07-14T22:31:29.398134962Z" level=info msg="StopPodSandbox for \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\"" Jul 14 22:31:29.405425 containerd[1575]: time="2025-07-14T22:31:29.405334571Z" level=info msg="Container to stop \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:31:29.405425 containerd[1575]: time="2025-07-14T22:31:29.405404314Z" level=info msg="Container to stop \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 14 22:31:29.411099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452-shm.mount: Deactivated successfully. Jul 14 22:31:29.440214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452-rootfs.mount: Deactivated successfully. Jul 14 22:31:30.088706 containerd[1575]: time="2025-07-14T22:31:30.088624709Z" level=info msg="shim disconnected" id=9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452 namespace=k8s.io Jul 14 22:31:30.088706 containerd[1575]: time="2025-07-14T22:31:30.088693910Z" level=warning msg="cleaning up after shim disconnected" id=9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452 namespace=k8s.io Jul 14 22:31:30.088706 containerd[1575]: time="2025-07-14T22:31:30.088703328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 14 22:31:30.405511 systemd-resolved[1462]: Under memory pressure, flushing caches. 
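"StopContainer ... with timeout 30 (s)" followed by "Stop container ... with signal terminated" and, roughly two seconds later, "shim disconnected" is the classic terminate-then-kill sequence: deliver SIGTERM, give the process the grace period to exit, and escalate to SIGKILL only if the deadline passes. A generic sketch of that pattern (not containerd's runtime code):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout delivers SIGTERM ("signal terminated" in the log),
// waits up to the grace period for the process to exit, and escalates
// to SIGKILL if the deadline passes.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		cmd.Process.Kill() // grace period expired: force-kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	fmt.Println(stopWithTimeout(cmd, 30*time.Second)) // "signal: terminated"
}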
Jul 14 22:31:30.484695 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:30.405522 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:30.559510 kubelet[2933]: I0714 22:31:30.559428 2933 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Jul 14 22:31:31.215676 systemd-networkd[1240]: cali990cc895aa0: Link DOWN Jul 14 22:31:31.215687 systemd-networkd[1240]: cali990cc895aa0: Lost carrier Jul 14 22:31:31.308550 kubelet[2933]: E0714 22:31:31.308378 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 14 22:31:31.310167 sshd[6436]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:31.322972 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:34712.service - OpenSSH per-connection server daemon (10.0.0.1:34712). Jul 14 22:31:31.323588 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:34708.service: Deactivated successfully. Jul 14 22:31:31.332455 systemd[1]: session-15.scope: Deactivated successfully. Jul 14 22:31:31.335976 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Jul 14 22:31:31.338643 systemd-logind[1548]: Removed session 15. Jul 14 22:31:31.366635 sshd[6586]: Accepted publickey for core from 10.0.0.1 port 34712 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:31.368277 sshd[6586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:31.372585 systemd-logind[1548]: New session 16 of user core. Jul 14 22:31:31.378701 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 14 22:31:32.264938 sshd[6586]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:32.274670 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:34720.service - OpenSSH per-connection server daemon (10.0.0.1:34720). Jul 14 22:31:32.275280 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:34712.service: Deactivated successfully. Jul 14 22:31:32.277755 systemd[1]: session-16.scope: Deactivated successfully. Jul 14 22:31:32.279697 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Jul 14 22:31:32.281009 systemd-logind[1548]: Removed session 16. Jul 14 22:31:32.311012 sshd[6610]: Accepted publickey for core from 10.0.0.1 port 34720 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:32.312870 sshd[6610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:32.317966 systemd-logind[1548]: New session 17 of user core. Jul 14 22:31:32.325737 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.213 [INFO][6570] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.214 [INFO][6570] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" iface="eth0" netns="/var/run/netns/cni-7df236d7-56f2-1e2b-7d80-ac475bcbb92c" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.214 [INFO][6570] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" iface="eth0" netns="/var/run/netns/cni-7df236d7-56f2-1e2b-7d80-ac475bcbb92c" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.229 [INFO][6570] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" after=15.027947ms iface="eth0" netns="/var/run/netns/cni-7df236d7-56f2-1e2b-7d80-ac475bcbb92c" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.229 [INFO][6570] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.229 [INFO][6570] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.327 [INFO][6580] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.327 [INFO][6580] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:31.327 [INFO][6580] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:32.338 [INFO][6580] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:32.339 [INFO][6580] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0" Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:32.340 [INFO][6580] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 14 22:31:32.349442 containerd[1575]: 2025-07-14 22:31:32.345 [INFO][6570] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Jul 14 22:31:32.350266 containerd[1575]: time="2025-07-14T22:31:32.349727694Z" level=info msg="TearDown network for sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" successfully" Jul 14 22:31:32.350266 containerd[1575]: time="2025-07-14T22:31:32.349763122Z" level=info msg="StopPodSandbox for \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" returns successfully" Jul 14 22:31:32.353431 systemd[1]: run-netns-cni\x2d7df236d7\x2d56f2\x2d1e2b\x2d7d80\x2dac475bcbb92c.mount: Deactivated successfully. Jul 14 22:31:32.453493 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:32.453526 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:32.455368 systemd-journald[1148]: Under memory pressure, flushing caches. 
Jul 14 22:31:32.495930 kubelet[2933]: I0714 22:31:32.495549 2933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gnmv\" (UniqueName: \"kubernetes.io/projected/0df97cf3-d658-4a6f-aa84-b00ae717886f-kube-api-access-9gnmv\") pod \"0df97cf3-d658-4a6f-aa84-b00ae717886f\" (UID: \"0df97cf3-d658-4a6f-aa84-b00ae717886f\") " Jul 14 22:31:32.495930 kubelet[2933]: I0714 22:31:32.495629 2933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-ca-bundle\") pod \"0df97cf3-d658-4a6f-aa84-b00ae717886f\" (UID: \"0df97cf3-d658-4a6f-aa84-b00ae717886f\") " Jul 14 22:31:32.495930 kubelet[2933]: I0714 22:31:32.495659 2933 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-backend-key-pair\") pod \"0df97cf3-d658-4a6f-aa84-b00ae717886f\" (UID: \"0df97cf3-d658-4a6f-aa84-b00ae717886f\") " Jul 14 22:31:32.643542 kubelet[2933]: I0714 22:31:32.640647 2933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df97cf3-d658-4a6f-aa84-b00ae717886f-kube-api-access-9gnmv" (OuterVolumeSpecName: "kube-api-access-9gnmv") pod "0df97cf3-d658-4a6f-aa84-b00ae717886f" (UID: "0df97cf3-d658-4a6f-aa84-b00ae717886f"). InnerVolumeSpecName "kube-api-access-9gnmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 14 22:31:32.643673 systemd[1]: var-lib-kubelet-pods-0df97cf3\x2dd658\x2d4a6f\x2daa84\x2db00ae717886f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9gnmv.mount: Deactivated successfully. Jul 14 22:31:32.644821 kubelet[2933]: I0714 22:31:32.644784 2933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0df97cf3-d658-4a6f-aa84-b00ae717886f" (UID: "0df97cf3-d658-4a6f-aa84-b00ae717886f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 14 22:31:32.647871 kubelet[2933]: I0714 22:31:32.647795 2933 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0df97cf3-d658-4a6f-aa84-b00ae717886f" (UID: "0df97cf3-d658-4a6f-aa84-b00ae717886f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 14 22:31:32.650656 systemd[1]: var-lib-kubelet-pods-0df97cf3\x2dd658\x2d4a6f\x2daa84\x2db00ae717886f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 14 22:31:32.697193 kubelet[2933]: I0714 22:31:32.697107 2933 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 14 22:31:32.697193 kubelet[2933]: I0714 22:31:32.697164 2933 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0df97cf3-d658-4a6f-aa84-b00ae717886f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 14 22:31:32.697193 kubelet[2933]: I0714 22:31:32.697178 2933 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gnmv\" (UniqueName: \"kubernetes.io/projected/0df97cf3-d658-4a6f-aa84-b00ae717886f-kube-api-access-9gnmv\") on node \"localhost\" DevicePath \"\"" Jul 14 22:31:32.731614 sshd[6610]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:32.736480 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:34720.service: Deactivated successfully. Jul 14 22:31:32.739505 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Jul 14 22:31:32.739551 systemd[1]: session-17.scope: Deactivated successfully. Jul 14 22:31:32.742115 systemd-logind[1548]: Removed session 17. Jul 14 22:31:33.268575 kubelet[2933]: I0714 22:31:33.268535 2933 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df97cf3-d658-4a6f-aa84-b00ae717886f" path="/var/lib/kubelet/pods/0df97cf3-d658-4a6f-aa84-b00ae717886f/volumes" Jul 14 22:31:36.357552 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:36.413457 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:36.357589 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:37.035770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050289400.mount: Deactivated successfully. Jul 14 22:31:37.750577 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:34734.service - OpenSSH per-connection server daemon (10.0.0.1:34734). Jul 14 22:31:37.790715 sshd[6639]: Accepted publickey for core from 10.0.0.1 port 34734 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:37.792532 sshd[6639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:37.796854 systemd-logind[1548]: New session 18 of user core. Jul 14 22:31:37.808626 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 14 22:31:38.406488 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:38.444463 systemd-journald[1148]: Under memory pressure, flushing caches. Jul 14 22:31:38.406498 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:38.445658 sshd[6639]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:38.451791 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Jul 14 22:31:38.453426 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:34734.service: Deactivated successfully. Jul 14 22:31:38.458869 systemd[1]: session-18.scope: Deactivated successfully. Jul 14 22:31:38.460094 systemd-logind[1548]: Removed session 18. 
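Note the strict per-volume order the kubelet keeps for the removed whisker pod above: UnmountVolume starts, TearDown succeeds, the volume is reported detached with an empty DevicePath, and only after all three volumes are gone is the orphaned pod directory cleaned up. A schematic sketch of that ordering with hypothetical helpers (not kubelet's reconciler API):

package main

import "fmt"

type volume struct{ name string }

// teardownPodVolumes mirrors the ordering logged above: unmount each
// volume, mark it detached, and remove the pod's volumes directory
// only after every volume has been torn down.
func teardownPodVolumes(podUID string, vols []volume) {
	for _, v := range vols {
		fmt.Printf("UnmountVolume started for %q\n", v.name)
		// ...unmount the projected/secret/configmap mount here...
		fmt.Printf("UnmountVolume.TearDown succeeded for %q\n", v.name)
		fmt.Printf("Volume detached for %q, DevicePath \"\"\n", v.name)
	}
	fmt.Printf("Cleaned up orphaned pod volumes dir for %s\n", podUID)
}

func main() {
	teardownPodVolumes("0df97cf3-d658-4a6f-aa84-b00ae717886f", []volume{
		{"kube-api-access-9gnmv"},
		{"whisker-ca-bundle"},
		{"whisker-backend-key-pair"},
	})
}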
Jul 14 22:31:41.377082 containerd[1575]: time="2025-07-14T22:31:41.376994886Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:41.613956 containerd[1575]: time="2025-07-14T22:31:41.613853271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 14 22:31:41.816911 containerd[1575]: time="2025-07-14T22:31:41.816834109Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:42.035083 containerd[1575]: time="2025-07-14T22:31:42.034972460Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 14 22:31:42.036584 containerd[1575]: time="2025-07-14T22:31:42.036535576Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 15.795695587s" Jul 14 22:31:42.036584 containerd[1575]: time="2025-07-14T22:31:42.036583177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 14 22:31:42.038091 containerd[1575]: time="2025-07-14T22:31:42.038036212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 14 22:31:42.129321 containerd[1575]: time="2025-07-14T22:31:42.129164858Z" level=info msg="CreateContainer within sandbox \"db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 14 22:31:43.452795 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:54300.service - OpenSSH per-connection server daemon (10.0.0.1:54300). Jul 14 22:31:43.508104 sshd[6664]: Accepted publickey for core from 10.0.0.1 port 54300 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA Jul 14 22:31:43.510153 sshd[6664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 14 22:31:43.514704 systemd-logind[1548]: New session 19 of user core. Jul 14 22:31:43.524647 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 14 22:31:43.967129 sshd[6664]: pam_unix(sshd:session): session closed for user core Jul 14 22:31:43.971248 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:54300.service: Deactivated successfully. Jul 14 22:31:43.974152 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Jul 14 22:31:43.974274 systemd[1]: session-19.scope: Deactivated successfully. Jul 14 22:31:43.975633 systemd-logind[1548]: Removed session 19. Jul 14 22:31:44.357480 systemd-resolved[1462]: Under memory pressure, flushing caches. Jul 14 22:31:44.357504 systemd-resolved[1462]: Flushed all caches. Jul 14 22:31:44.359372 systemd-journald[1148]: Under memory pressure, flushing caches. 
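A quick sanity check on the goldmane pull above (our arithmetic, not containerd output): 66,352,308 bytes read over the reported 15.795695587 s works out to roughly 4 MiB/s average transfer.

package main

import "fmt"

func main() {
	bytesRead := 66_352_308.0 // "bytes read=66352308"
	seconds := 15.795695587   // "in 15.795695587s"
	fmt.Printf("%.2f MiB/s\n", bytesRead/seconds/(1<<20)) // ~4.01 MiB/s
}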
Jul 14 22:31:45.279371 kubelet[2933]: E0714 22:31:45.279272 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:31:46.405827 systemd-resolved[1462]: Under memory pressure, flushing caches.
Jul 14 22:31:46.503512 systemd-journald[1148]: Under memory pressure, flushing caches.
Jul 14 22:31:46.405834 systemd-resolved[1462]: Flushed all caches.
Jul 14 22:31:46.718618 containerd[1575]: time="2025-07-14T22:31:46.718434843Z" level=info msg="CreateContainer within sandbox \"db186c871459830f98fd105c9a066db878447a79998d75b725cf17be095a6025\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"414258bb3cf0b07ae75408e3344ea4cc8915a2382d098e8407f6ead161904d59\""
Jul 14 22:31:46.719400 containerd[1575]: time="2025-07-14T22:31:46.719329996Z" level=info msg="StartContainer for \"414258bb3cf0b07ae75408e3344ea4cc8915a2382d098e8407f6ead161904d59\""
Jul 14 22:31:47.555818 containerd[1575]: time="2025-07-14T22:31:47.555761390Z" level=info msg="StartContainer for \"414258bb3cf0b07ae75408e3344ea4cc8915a2382d098e8407f6ead161904d59\" returns successfully"
Jul 14 22:31:48.985591 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:60892.service - OpenSSH per-connection server daemon (10.0.0.1:60892).
Jul 14 22:31:49.094001 sshd[6763]: Accepted publickey for core from 10.0.0.1 port 60892 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:31:49.096785 sshd[6763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:31:49.104550 systemd-logind[1548]: New session 20 of user core.
Jul 14 22:31:49.109883 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 14 22:31:49.263008 sshd[6763]: pam_unix(sshd:session): session closed for user core
Jul 14 22:31:49.268289 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:60892.service: Deactivated successfully.
Jul 14 22:31:49.271313 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit.
Jul 14 22:31:49.271457 systemd[1]: session-20.scope: Deactivated successfully.
Jul 14 22:31:49.272656 systemd-logind[1548]: Removed session 20.
Jul 14 22:31:50.166009 containerd[1575]: time="2025-07-14T22:31:50.165919496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:31:50.167484 containerd[1575]: time="2025-07-14T22:31:50.167436531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 14 22:31:50.168886 containerd[1575]: time="2025-07-14T22:31:50.168767883Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:31:50.171175 containerd[1575]: time="2025-07-14T22:31:50.171136387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:31:50.172009 containerd[1575]: time="2025-07-14T22:31:50.171969701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 8.13388629s"
Jul 14 22:31:50.172058 containerd[1575]: time="2025-07-14T22:31:50.172007393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 14 22:31:50.175404 containerd[1575]: time="2025-07-14T22:31:50.175376428Z" level=info msg="CreateContainer within sandbox \"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 14 22:31:50.200311 containerd[1575]: time="2025-07-14T22:31:50.200248993Z" level=info msg="CreateContainer within sandbox \"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"147eedd67d7064531985b3efb5f018af713f6b93b923503e9c7e42f0ea9c2605\""
Jul 14 22:31:50.200977 containerd[1575]: time="2025-07-14T22:31:50.200938614Z" level=info msg="StartContainer for \"147eedd67d7064531985b3efb5f018af713f6b93b923503e9c7e42f0ea9c2605\""
Jul 14 22:31:50.445269 containerd[1575]: time="2025-07-14T22:31:50.445113212Z" level=info msg="StartContainer for \"147eedd67d7064531985b3efb5f018af713f6b93b923503e9c7e42f0ea9c2605\" returns successfully"
Jul 14 22:31:50.446553 containerd[1575]: time="2025-07-14T22:31:50.446526259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 14 22:31:52.702303 containerd[1575]: time="2025-07-14T22:31:52.702233342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:31:52.703389 containerd[1575]: time="2025-07-14T22:31:52.703327673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784"
Jul 14 22:31:52.705282 containerd[1575]: time="2025-07-14T22:31:52.705249015Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:31:52.708080 containerd[1575]: time="2025-07-14T22:31:52.708025343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 14 22:31:52.708878 containerd[1575]: time="2025-07-14T22:31:52.708847536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.262284728s"
Jul 14 22:31:52.708937 containerd[1575]: time="2025-07-14T22:31:52.708886069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\""
Jul 14 22:31:52.711575 containerd[1575]: time="2025-07-14T22:31:52.711536428Z" level=info msg="CreateContainer within sandbox \"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jul 14 22:31:52.732475 containerd[1575]: time="2025-07-14T22:31:52.732400140Z" level=info msg="CreateContainer within sandbox \"41dccc0a6f40aa926bd473966c2d29bf540aefbe8cc98f84ddb127574bf3c414\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"03dbec1b8eb80b8777fd686bd72c1b5e0c01d679772f778a2d1fba10972aaae9\""
Jul 14 22:31:52.733064 containerd[1575]: time="2025-07-14T22:31:52.733035027Z" level=info msg="StartContainer for \"03dbec1b8eb80b8777fd686bd72c1b5e0c01d679772f778a2d1fba10972aaae9\""
Jul 14 22:31:52.801806 containerd[1575]: time="2025-07-14T22:31:52.801755406Z" level=info msg="StartContainer for \"03dbec1b8eb80b8777fd686bd72c1b5e0c01d679772f778a2d1fba10972aaae9\" returns successfully"
Jul 14 22:31:53.486602 kubelet[2933]: I0714 22:31:53.486520 2933 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 14 22:31:53.486602 kubelet[2933]: I0714 22:31:53.486604 2933 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 14 22:31:53.603828 kubelet[2933]: I0714 22:31:53.603745 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-5b429" podStartSLOduration=74.826945572 podStartE2EDuration="1m55.603720729s" podCreationTimestamp="2025-07-14 22:29:58 +0000 UTC" firstStartedPulling="2025-07-14 22:31:01.260976133 +0000 UTC m=+108.144224539" lastFinishedPulling="2025-07-14 22:31:42.03775128 +0000 UTC m=+148.920999696" observedRunningTime="2025-07-14 22:31:48.558083419 +0000 UTC m=+155.441331825" watchObservedRunningTime="2025-07-14 22:31:53.603720729 +0000 UTC m=+160.486969145"
Jul 14 22:31:54.273640 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:60896.service - OpenSSH per-connection server daemon (10.0.0.1:60896).
Jul 14 22:31:54.312610 sshd[6866]: Accepted publickey for core from 10.0.0.1 port 60896 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:31:54.314526 sshd[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:31:54.319414 systemd-logind[1548]: New session 21 of user core.
Jul 14 22:31:54.329630 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 14 22:31:54.634682 sshd[6866]: pam_unix(sshd:session): session closed for user core
Jul 14 22:31:54.639565 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:60896.service: Deactivated successfully.
Jul 14 22:31:54.642415 systemd[1]: session-21.scope: Deactivated successfully.
Jul 14 22:31:54.642564 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit.
Jul 14 22:31:54.643643 systemd-logind[1548]: Removed session 21.
Jul 14 22:31:59.214490 systemd[1]: run-containerd-runc-k8s.io-702beb8fe6d9e903ed35e8e134d3617598827c977d9ac757fe2f220b53494586-runc.AzMANS.mount: Deactivated successfully.
Jul 14 22:31:59.658635 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:34988.service - OpenSSH per-connection server daemon (10.0.0.1:34988).
Jul 14 22:31:59.698290 sshd[6949]: Accepted publickey for core from 10.0.0.1 port 34988 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:31:59.700143 sshd[6949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:31:59.704388 systemd-logind[1548]: New session 22 of user core.
Jul 14 22:31:59.710653 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 14 22:31:59.896871 sshd[6949]: pam_unix(sshd:session): session closed for user core
Jul 14 22:31:59.900672 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:34988.service: Deactivated successfully.
Jul 14 22:31:59.902939 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit.
Jul 14 22:31:59.903023 systemd[1]: session-22.scope: Deactivated successfully.
Jul 14 22:31:59.904178 systemd-logind[1548]: Removed session 22.
Jul 14 22:32:00.265688 kubelet[2933]: E0714 22:32:00.265641 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:32:01.265807 kubelet[2933]: E0714 22:32:01.265715 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:32:04.918674 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:35002.service - OpenSSH per-connection server daemon (10.0.0.1:35002).
Jul 14 22:32:04.957569 sshd[6965]: Accepted publickey for core from 10.0.0.1 port 35002 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:04.959519 sshd[6965]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:04.963900 systemd-logind[1548]: New session 23 of user core.
Jul 14 22:32:04.974878 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 14 22:32:05.378127 sshd[6965]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:05.386828 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:35002.service: Deactivated successfully.
Jul 14 22:32:05.389824 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit.
Jul 14 22:32:05.389893 systemd[1]: session-23.scope: Deactivated successfully.
Jul 14 22:32:05.391587 systemd-logind[1548]: Removed session 23.
Jul 14 22:32:06.265616 kubelet[2933]: E0714 22:32:06.265565 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:32:09.265998 kubelet[2933]: E0714 22:32:09.265866 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:32:10.397580 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:54230.service - OpenSSH per-connection server daemon (10.0.0.1:54230).
Jul 14 22:32:10.432661 sshd[6986]: Accepted publickey for core from 10.0.0.1 port 54230 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:10.434641 sshd[6986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:10.439409 systemd-logind[1548]: New session 24 of user core.
Jul 14 22:32:10.447725 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 14 22:32:10.602720 sshd[6986]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:10.606640 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:54230.service: Deactivated successfully.
Jul 14 22:32:10.609695 systemd-logind[1548]: Session 24 logged out. Waiting for processes to exit.
Jul 14 22:32:10.609776 systemd[1]: session-24.scope: Deactivated successfully.
Jul 14 22:32:10.610753 systemd-logind[1548]: Removed session 24.
Jul 14 22:32:15.617624 systemd[1]: Started sshd@24-10.0.0.138:22-10.0.0.1:54238.service - OpenSSH per-connection server daemon (10.0.0.1:54238).
Jul 14 22:32:15.652496 sshd[7005]: Accepted publickey for core from 10.0.0.1 port 54238 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:15.654188 sshd[7005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:15.658471 systemd-logind[1548]: New session 25 of user core.
Jul 14 22:32:15.670629 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 14 22:32:15.831934 sshd[7005]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:15.836813 systemd[1]: sshd@24-10.0.0.138:22-10.0.0.1:54238.service: Deactivated successfully.
Jul 14 22:32:15.839583 systemd-logind[1548]: Session 25 logged out. Waiting for processes to exit.
Jul 14 22:32:15.839668 systemd[1]: session-25.scope: Deactivated successfully.
Jul 14 22:32:15.840925 systemd-logind[1548]: Removed session 25.
Jul 14 22:32:16.266426 kubelet[2933]: E0714 22:32:16.266328 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:32:20.844829 systemd[1]: Started sshd@25-10.0.0.138:22-10.0.0.1:60662.service - OpenSSH per-connection server daemon (10.0.0.1:60662).
Jul 14 22:32:20.941852 sshd[7020]: Accepted publickey for core from 10.0.0.1 port 60662 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:20.943998 sshd[7020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:20.948683 systemd-logind[1548]: New session 26 of user core.
Jul 14 22:32:20.957674 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 14 22:32:21.380673 sshd[7020]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:21.385358 systemd[1]: sshd@25-10.0.0.138:22-10.0.0.1:60662.service: Deactivated successfully.
Jul 14 22:32:21.390283 systemd[1]: session-26.scope: Deactivated successfully.
Jul 14 22:32:21.391226 systemd-logind[1548]: Session 26 logged out. Waiting for processes to exit.
Jul 14 22:32:21.392485 systemd-logind[1548]: Removed session 26.
Jul 14 22:32:21.416562 systemd[1]: Started sshd@26-10.0.0.138:22-10.0.0.1:60666.service - OpenSSH per-connection server daemon (10.0.0.1:60666).
Jul 14 22:32:21.455414 sshd[7036]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:21.457274 sshd[7036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:21.462434 systemd-logind[1548]: New session 27 of user core.
Jul 14 22:32:21.470646 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 14 22:32:21.556362 kubelet[2933]: I0714 22:32:21.556276 2933 scope.go:117] "RemoveContainer" containerID="812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa"
Jul 14 22:32:21.558020 containerd[1575]: time="2025-07-14T22:32:21.557980402Z" level=info msg="RemoveContainer for \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\""
Jul 14 22:32:21.999447 containerd[1575]: time="2025-07-14T22:32:21.999385384Z" level=info msg="RemoveContainer for \"812ae08d38660e044db7405a389149d014ab8eef667158f445f7bb5d6f0602fa\" returns successfully"
Jul 14 22:32:21.999833 kubelet[2933]: I0714 22:32:21.999797 2933 scope.go:117] "RemoveContainer" containerID="ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d"
Jul 14 22:32:22.001209 containerd[1575]: time="2025-07-14T22:32:22.001166481Z" level=info msg="RemoveContainer for \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\""
Jul 14 22:32:22.195753 containerd[1575]: time="2025-07-14T22:32:22.195681788Z" level=info msg="RemoveContainer for \"ab75dae61e8902a3e57ba60a57dc0195e8dcb39c434201b30a4ceb5a54edf38d\" returns successfully"
Jul 14 22:32:22.198126 containerd[1575]: time="2025-07-14T22:32:22.197616216Z" level=info msg="StopPodSandbox for \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\""
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.151 [WARNING][7055] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.163 [INFO][7055] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.163 [INFO][7055] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" iface="eth0" netns=""
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.163 [INFO][7055] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.163 [INFO][7055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.623 [INFO][7064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.635 [INFO][7064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.635 [INFO][7064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.650 [WARNING][7064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.650 [INFO][7064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.651 [INFO][7064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:32:23.659368 containerd[1575]: 2025-07-14 22:32:23.655 [INFO][7055] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:23.660271 containerd[1575]: time="2025-07-14T22:32:23.659437822Z" level=info msg="TearDown network for sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" successfully"
Jul 14 22:32:23.660271 containerd[1575]: time="2025-07-14T22:32:23.659508717Z" level=info msg="StopPodSandbox for \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" returns successfully"
Jul 14 22:32:23.660271 containerd[1575]: time="2025-07-14T22:32:23.660184889Z" level=info msg="RemovePodSandbox for \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\""
Jul 14 22:32:23.660271 containerd[1575]: time="2025-07-14T22:32:23.660213614Z" level=info msg="Forcibly stopping sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\""
Jul 14 22:32:24.421732 systemd-resolved[1462]: Under memory pressure, flushing caches.
Jul 14 22:32:24.428657 systemd-journald[1148]: Under memory pressure, flushing caches.
Jul 14 22:32:24.421783 systemd-resolved[1462]: Flushed all caches.
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.734 [WARNING][7082] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" WorkloadEndpoint="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.734 [INFO][7082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.734 [INFO][7082] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" iface="eth0" netns=""
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.734 [INFO][7082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.734 [INFO][7082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.762 [INFO][7091] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.763 [INFO][7091] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:23.763 [INFO][7091] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:24.434 [WARNING][7091] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:24.434 [INFO][7091] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" HandleID="k8s-pod-network.9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452" Workload="localhost-k8s-whisker--944fbfff--rkn5j-eth0"
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:24.436 [INFO][7091] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 14 22:32:24.444802 containerd[1575]: 2025-07-14 22:32:24.441 [INFO][7082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452"
Jul 14 22:32:24.445265 containerd[1575]: time="2025-07-14T22:32:24.444875295Z" level=info msg="TearDown network for sandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" successfully"
Jul 14 22:32:24.578978 containerd[1575]: time="2025-07-14T22:32:24.578871765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 14 22:32:24.579181 containerd[1575]: time="2025-07-14T22:32:24.578998224Z" level=info msg="RemovePodSandbox \"9d9db5668ad9277e113191bb75d6c7f5b63edd080173c686dc1bd0c530f53452\" returns successfully"
Jul 14 22:32:27.925221 sshd[7036]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:27.933831 systemd[1]: Started sshd@27-10.0.0.138:22-10.0.0.1:60678.service - OpenSSH per-connection server daemon (10.0.0.1:60678).
Jul 14 22:32:27.934758 systemd[1]: sshd@26-10.0.0.138:22-10.0.0.1:60666.service: Deactivated successfully.
Jul 14 22:32:27.939614 systemd-logind[1548]: Session 27 logged out. Waiting for processes to exit.
Jul 14 22:32:27.942125 systemd[1]: session-27.scope: Deactivated successfully.
Jul 14 22:32:27.945508 systemd-logind[1548]: Removed session 27.
Jul 14 22:32:28.145122 sshd[7120]: Accepted publickey for core from 10.0.0.1 port 60678 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:28.146933 sshd[7120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:28.151823 systemd-logind[1548]: New session 28 of user core.
Jul 14 22:32:28.163654 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 14 22:32:29.869317 kubelet[2933]: I0714 22:32:29.869215 2933 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kbjjj" podStartSLOduration=100.534241691 podStartE2EDuration="2m31.869186964s" podCreationTimestamp="2025-07-14 22:29:58 +0000 UTC" firstStartedPulling="2025-07-14 22:31:01.374945948 +0000 UTC m=+108.258194364" lastFinishedPulling="2025-07-14 22:31:52.709891231 +0000 UTC m=+159.593139637" observedRunningTime="2025-07-14 22:31:53.604302725 +0000 UTC m=+160.487551131" watchObservedRunningTime="2025-07-14 22:32:29.869186964 +0000 UTC m=+196.752435370"
Jul 14 22:32:35.521620 sshd[7120]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:35.529717 systemd[1]: Started sshd@28-10.0.0.138:22-10.0.0.1:40434.service - OpenSSH per-connection server daemon (10.0.0.1:40434).
Jul 14 22:32:35.530759 systemd[1]: sshd@27-10.0.0.138:22-10.0.0.1:60678.service: Deactivated successfully.
Jul 14 22:32:35.534805 systemd[1]: session-28.scope: Deactivated successfully.
Jul 14 22:32:35.536374 systemd-logind[1548]: Session 28 logged out. Waiting for processes to exit.
Jul 14 22:32:35.537537 systemd-logind[1548]: Removed session 28.
Jul 14 22:32:35.580667 sshd[7263]: Accepted publickey for core from 10.0.0.1 port 40434 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:35.582734 sshd[7263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:35.587831 systemd-logind[1548]: New session 29 of user core.
Jul 14 22:32:35.596899 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 14 22:32:36.391682 systemd-resolved[1462]: Under memory pressure, flushing caches.
Jul 14 22:32:36.397417 systemd-journald[1148]: Under memory pressure, flushing caches.
Jul 14 22:32:36.391690 systemd-resolved[1462]: Flushed all caches.
Jul 14 22:32:38.437464 systemd-resolved[1462]: Under memory pressure, flushing caches.
Jul 14 22:32:38.467650 systemd-journald[1148]: Under memory pressure, flushing caches.
Jul 14 22:32:38.437472 systemd-resolved[1462]: Flushed all caches.
Jul 14 22:32:39.796658 sshd[7263]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:39.806596 systemd[1]: Started sshd@29-10.0.0.138:22-10.0.0.1:34238.service - OpenSSH per-connection server daemon (10.0.0.1:34238).
Jul 14 22:32:39.807942 systemd[1]: sshd@28-10.0.0.138:22-10.0.0.1:40434.service: Deactivated successfully.
Jul 14 22:32:39.810799 systemd[1]: session-29.scope: Deactivated successfully.
Jul 14 22:32:39.811448 systemd-logind[1548]: Session 29 logged out. Waiting for processes to exit.
Jul 14 22:32:39.812836 systemd-logind[1548]: Removed session 29.
Jul 14 22:32:39.839451 sshd[7279]: Accepted publickey for core from 10.0.0.1 port 34238 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:39.841074 sshd[7279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:39.845151 systemd-logind[1548]: New session 30 of user core.
Jul 14 22:32:39.854633 systemd[1]: Started session-30.scope - Session 30 of User core.
Jul 14 22:32:40.035163 sshd[7279]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:40.039240 systemd[1]: sshd@29-10.0.0.138:22-10.0.0.1:34238.service: Deactivated successfully.
Jul 14 22:32:40.042100 systemd-logind[1548]: Session 30 logged out. Waiting for processes to exit.
Jul 14 22:32:40.042916 systemd[1]: session-30.scope: Deactivated successfully.
Jul 14 22:32:40.044103 systemd-logind[1548]: Removed session 30.
Jul 14 22:32:45.048741 systemd[1]: Started sshd@30-10.0.0.138:22-10.0.0.1:34240.service - OpenSSH per-connection server daemon (10.0.0.1:34240).
Jul 14 22:32:45.089642 sshd[7297]: Accepted publickey for core from 10.0.0.1 port 34240 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:45.091558 sshd[7297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:45.096992 systemd-logind[1548]: New session 31 of user core.
Jul 14 22:32:45.111920 systemd[1]: Started session-31.scope - Session 31 of User core.
Jul 14 22:32:45.252256 sshd[7297]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:45.256585 systemd[1]: sshd@30-10.0.0.138:22-10.0.0.1:34240.service: Deactivated successfully.
Jul 14 22:32:45.259307 systemd-logind[1548]: Session 31 logged out. Waiting for processes to exit.
Jul 14 22:32:45.259396 systemd[1]: session-31.scope: Deactivated successfully.
Jul 14 22:32:45.260893 systemd-logind[1548]: Removed session 31.
Jul 14 22:32:50.262667 systemd[1]: Started sshd@31-10.0.0.138:22-10.0.0.1:33360.service - OpenSSH per-connection server daemon (10.0.0.1:33360).
Jul 14 22:32:50.374994 sshd[7314]: Accepted publickey for core from 10.0.0.1 port 33360 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:50.377114 sshd[7314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:50.382047 systemd-logind[1548]: New session 32 of user core.
Jul 14 22:32:50.389635 systemd[1]: Started session-32.scope - Session 32 of User core.
Jul 14 22:32:50.572960 sshd[7314]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:50.577681 systemd[1]: sshd@31-10.0.0.138:22-10.0.0.1:33360.service: Deactivated successfully.
Jul 14 22:32:50.580619 systemd-logind[1548]: Session 32 logged out. Waiting for processes to exit.
Jul 14 22:32:50.580702 systemd[1]: session-32.scope: Deactivated successfully.
Jul 14 22:32:50.581971 systemd-logind[1548]: Removed session 32.
Jul 14 22:32:55.582553 systemd[1]: Started sshd@32-10.0.0.138:22-10.0.0.1:33364.service - OpenSSH per-connection server daemon (10.0.0.1:33364).
Jul 14 22:32:55.622102 sshd[7331]: Accepted publickey for core from 10.0.0.1 port 33364 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:32:55.624061 sshd[7331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:32:55.628689 systemd-logind[1548]: New session 33 of user core.
Jul 14 22:32:55.633720 systemd[1]: Started session-33.scope - Session 33 of User core.
Jul 14 22:32:55.767264 sshd[7331]: pam_unix(sshd:session): session closed for user core
Jul 14 22:32:55.772119 systemd[1]: sshd@32-10.0.0.138:22-10.0.0.1:33364.service: Deactivated successfully.
Jul 14 22:32:55.774624 systemd-logind[1548]: Session 33 logged out. Waiting for processes to exit.
Jul 14 22:32:55.774748 systemd[1]: session-33.scope: Deactivated successfully.
Jul 14 22:32:55.776331 systemd-logind[1548]: Removed session 33.
Jul 14 22:32:59.266699 kubelet[2933]: E0714 22:32:59.266253 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:33:00.777695 systemd[1]: Started sshd@33-10.0.0.138:22-10.0.0.1:39220.service - OpenSSH per-connection server daemon (10.0.0.1:39220).
Jul 14 22:33:00.815982 sshd[7416]: Accepted publickey for core from 10.0.0.1 port 39220 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:33:00.818486 sshd[7416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:33:00.825092 systemd-logind[1548]: New session 34 of user core.
Jul 14 22:33:00.832015 systemd[1]: Started session-34.scope - Session 34 of User core.
Jul 14 22:33:01.021204 sshd[7416]: pam_unix(sshd:session): session closed for user core
Jul 14 22:33:01.026432 systemd[1]: sshd@33-10.0.0.138:22-10.0.0.1:39220.service: Deactivated successfully.
Jul 14 22:33:01.030214 systemd[1]: session-34.scope: Deactivated successfully.
Jul 14 22:33:01.031767 systemd-logind[1548]: Session 34 logged out. Waiting for processes to exit.
Jul 14 22:33:01.032948 systemd-logind[1548]: Removed session 34.
Jul 14 22:33:06.031698 systemd[1]: Started sshd@34-10.0.0.138:22-10.0.0.1:39222.service - OpenSSH per-connection server daemon (10.0.0.1:39222).
Jul 14 22:33:06.080870 sshd[7431]: Accepted publickey for core from 10.0.0.1 port 39222 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:33:06.082932 sshd[7431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:33:06.087907 systemd-logind[1548]: New session 35 of user core.
Jul 14 22:33:06.096794 systemd[1]: Started session-35.scope - Session 35 of User core.
Jul 14 22:33:06.269374 kubelet[2933]: E0714 22:33:06.268034 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:33:06.330423 sshd[7431]: pam_unix(sshd:session): session closed for user core
Jul 14 22:33:06.334363 systemd[1]: sshd@34-10.0.0.138:22-10.0.0.1:39222.service: Deactivated successfully.
Jul 14 22:33:06.341367 systemd[1]: session-35.scope: Deactivated successfully.
Jul 14 22:33:06.343963 systemd-logind[1548]: Session 35 logged out. Waiting for processes to exit.
Jul 14 22:33:06.345013 systemd-logind[1548]: Removed session 35.
Jul 14 22:33:10.265827 kubelet[2933]: E0714 22:33:10.265783 2933 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 14 22:33:11.341791 systemd[1]: Started sshd@35-10.0.0.138:22-10.0.0.1:36686.service - OpenSSH per-connection server daemon (10.0.0.1:36686).
Jul 14 22:33:11.382379 sshd[7446]: Accepted publickey for core from 10.0.0.1 port 36686 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:33:11.387133 sshd[7446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:33:11.406504 systemd-logind[1548]: New session 36 of user core.
Jul 14 22:33:11.411078 systemd[1]: Started session-36.scope - Session 36 of User core.
Jul 14 22:33:11.621925 sshd[7446]: pam_unix(sshd:session): session closed for user core
Jul 14 22:33:11.626692 systemd[1]: sshd@35-10.0.0.138:22-10.0.0.1:36686.service: Deactivated successfully.
Jul 14 22:33:11.629923 systemd-logind[1548]: Session 36 logged out. Waiting for processes to exit.
Jul 14 22:33:11.631170 systemd[1]: session-36.scope: Deactivated successfully.
Jul 14 22:33:11.632687 systemd-logind[1548]: Removed session 36.
Jul 14 22:33:16.636752 systemd[1]: Started sshd@36-10.0.0.138:22-10.0.0.1:36694.service - OpenSSH per-connection server daemon (10.0.0.1:36694).
Jul 14 22:33:16.686553 sshd[7463]: Accepted publickey for core from 10.0.0.1 port 36694 ssh2: RSA SHA256:EyZbeQVMBuGqH6MJ47sL9mWR0Z/yJxF5rUBSwIZwxOA
Jul 14 22:33:16.688359 sshd[7463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 14 22:33:16.692821 systemd-logind[1548]: New session 37 of user core.
Jul 14 22:33:16.705718 systemd[1]: Started session-37.scope - Session 37 of User core.
Jul 14 22:33:17.006212 sshd[7463]: pam_unix(sshd:session): session closed for user core
Jul 14 22:33:17.011248 systemd[1]: sshd@36-10.0.0.138:22-10.0.0.1:36694.service: Deactivated successfully.
Jul 14 22:33:17.014137 systemd[1]: session-37.scope: Deactivated successfully.
Jul 14 22:33:17.014947 systemd-logind[1548]: Session 37 logged out. Waiting for processes to exit.
Jul 14 22:33:17.016020 systemd-logind[1548]: Removed session 37.