Jul 7 00:00:12.898079 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 7 00:00:12.898110 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:00:12.898121 kernel: BIOS-provided physical RAM map:
Jul 7 00:00:12.898127 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 7 00:00:12.898133 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 7 00:00:12.898140 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 7 00:00:12.898147 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 7 00:00:12.898153 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 7 00:00:12.898159 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 7 00:00:12.898166 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 7 00:00:12.898174 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 7 00:00:12.898181 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 7 00:00:12.898187 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 7 00:00:12.898194 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 7 00:00:12.898209 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 7 00:00:12.898216 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 7 00:00:12.898225 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 7 00:00:12.898232 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 7 00:00:12.898238 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 7 00:00:12.898245 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 7 00:00:12.898251 kernel: NX (Execute Disable) protection: active
Jul 7 00:00:12.898258 kernel: APIC: Static calls initialized
Jul 7 00:00:12.898265 kernel: efi: EFI v2.7 by EDK II
Jul 7 00:00:12.898271 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jul 7 00:00:12.898278 kernel: SMBIOS 2.8 present.
Jul 7 00:00:12.898285 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 7 00:00:12.898291 kernel: Hypervisor detected: KVM
Jul 7 00:00:12.898319 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 00:00:12.898326 kernel: kvm-clock: using sched offset of 3838303172 cycles
Jul 7 00:00:12.898333 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 00:00:12.898340 kernel: tsc: Detected 2794.750 MHz processor
Jul 7 00:00:12.898347 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 00:00:12.898355 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 00:00:12.898361 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 7 00:00:12.898368 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 00:00:12.898375 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 00:00:12.898385 kernel: Using GB pages for direct mapping
Jul 7 00:00:12.898392 kernel: Secure boot disabled
Jul 7 00:00:12.898399 kernel: ACPI: Early table checksum verification disabled
Jul 7 00:00:12.898406 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 7 00:00:12.898417 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 00:00:12.898424 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:00:12.898431 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:00:12.898441 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 7 00:00:12.898448 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:00:12.898455 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:00:12.898462 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:00:12.898470 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 00:00:12.898477 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 7 00:00:12.898484 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 7 00:00:12.898493 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 7 00:00:12.898500 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 7 00:00:12.898507 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 7 00:00:12.898514 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 7 00:00:12.898521 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 7 00:00:12.898528 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 7 00:00:12.898536 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 7 00:00:12.898543 kernel: No NUMA configuration found
Jul 7 00:00:12.898550 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 7 00:00:12.898559 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 7 00:00:12.898566 kernel: Zone ranges:
Jul 7 00:00:12.898574 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 00:00:12.898581 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 7 00:00:12.898588 kernel: Normal empty
Jul 7 00:00:12.898595 kernel: Movable zone start for each node
Jul 7 00:00:12.898602 kernel: Early memory node ranges
Jul 7 00:00:12.898609 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 7 00:00:12.898616 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 7 00:00:12.898623 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 7 00:00:12.898633 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 7 00:00:12.898640 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 7 00:00:12.898647 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 7 00:00:12.898654 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 7 00:00:12.898661 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:00:12.898669 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 7 00:00:12.898676 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 7 00:00:12.898683 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 00:00:12.898690 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 7 00:00:12.898700 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 7 00:00:12.898707 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 7 00:00:12.898714 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 00:00:12.898721 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 00:00:12.898728 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 00:00:12.898735 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 00:00:12.898742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 00:00:12.898749 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 00:00:12.898756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 00:00:12.898766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 00:00:12.898773 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 00:00:12.898780 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 00:00:12.898787 kernel: TSC deadline timer available
Jul 7 00:00:12.898794 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 7 00:00:12.898802 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 00:00:12.898809 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 00:00:12.898818 kernel: kvm-guest: setup PV sched yield
Jul 7 00:00:12.898825 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 7 00:00:12.898834 kernel: Booting paravirtualized kernel on KVM
Jul 7 00:00:12.898845 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 00:00:12.898852 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 00:00:12.898859 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 7 00:00:12.898866 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 7 00:00:12.898873 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 00:00:12.898880 kernel: kvm-guest: PV spinlocks enabled
Jul 7 00:00:12.898887 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 00:00:12.898896 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:00:12.898906 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 00:00:12.898913 kernel: random: crng init done
Jul 7 00:00:12.898920 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 00:00:12.898928 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 00:00:12.898935 kernel: Fallback order for Node 0: 0
Jul 7 00:00:12.898942 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 7 00:00:12.898949 kernel: Policy zone: DMA32
Jul 7 00:00:12.898956 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 00:00:12.898964 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 171124K reserved, 0K cma-reserved)
Jul 7 00:00:12.898974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 00:00:12.898981 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 7 00:00:12.898988 kernel: ftrace: allocated 149 pages with 4 groups
Jul 7 00:00:12.898995 kernel: Dynamic Preempt: voluntary
Jul 7 00:00:12.899011 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 00:00:12.899021 kernel: rcu: RCU event tracing is enabled.
Jul 7 00:00:12.899028 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 00:00:12.899036 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 00:00:12.899044 kernel: Rude variant of Tasks RCU enabled.
Jul 7 00:00:12.899051 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 00:00:12.899058 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 00:00:12.899066 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 00:00:12.899076 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 00:00:12.899083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 00:00:12.899091 kernel: Console: colour dummy device 80x25
Jul 7 00:00:12.899106 kernel: printk: console [ttyS0] enabled
Jul 7 00:00:12.899114 kernel: ACPI: Core revision 20230628
Jul 7 00:00:12.899125 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 00:00:12.899133 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 00:00:12.899140 kernel: x2apic enabled
Jul 7 00:00:12.899148 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 00:00:12.899156 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 00:00:12.899163 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 00:00:12.899171 kernel: kvm-guest: setup PV IPIs
Jul 7 00:00:12.899178 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 00:00:12.899186 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 7 00:00:12.899196 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 7 00:00:12.899204 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 00:00:12.899212 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 00:00:12.899219 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 00:00:12.899227 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 00:00:12.899235 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 00:00:12.899242 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 00:00:12.899250 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 00:00:12.899258 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 00:00:12.899268 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 00:00:12.899276 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 00:00:12.899284 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 00:00:12.899292 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 00:00:12.899354 kernel: x86/bugs: return thunk changed
Jul 7 00:00:12.899363 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 00:00:12.899371 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 00:00:12.899389 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 00:00:12.899416 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 00:00:12.899424 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 00:00:12.899432 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 00:00:12.899439 kernel: Freeing SMP alternatives memory: 32K
Jul 7 00:00:12.899447 kernel: pid_max: default: 32768 minimum: 301
Jul 7 00:00:12.899455 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 7 00:00:12.899462 kernel: landlock: Up and running.
Jul 7 00:00:12.899470 kernel: SELinux: Initializing.
Jul 7 00:00:12.899477 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:00:12.899488 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 00:00:12.899496 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 00:00:12.899504 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:00:12.899511 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:00:12.899519 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 00:00:12.899526 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 00:00:12.899534 kernel: ... version: 0
Jul 7 00:00:12.899541 kernel: ... bit width: 48
Jul 7 00:00:12.899548 kernel: ... generic registers: 6
Jul 7 00:00:12.899559 kernel: ... value mask: 0000ffffffffffff
Jul 7 00:00:12.899566 kernel: ... max period: 00007fffffffffff
Jul 7 00:00:12.899574 kernel: ... fixed-purpose events: 0
Jul 7 00:00:12.899581 kernel: ... event mask: 000000000000003f
Jul 7 00:00:12.899588 kernel: signal: max sigframe size: 1776
Jul 7 00:00:12.899596 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 00:00:12.899604 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 00:00:12.899611 kernel: smp: Bringing up secondary CPUs ...
Jul 7 00:00:12.899619 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 00:00:12.899629 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 00:00:12.899636 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 00:00:12.899644 kernel: smpboot: Max logical packages: 1
Jul 7 00:00:12.899651 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 7 00:00:12.899659 kernel: devtmpfs: initialized
Jul 7 00:00:12.899666 kernel: x86/mm: Memory block size: 128MB
Jul 7 00:00:12.899674 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 7 00:00:12.899681 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 7 00:00:12.899689 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 7 00:00:12.899699 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 7 00:00:12.899707 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 7 00:00:12.899714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 00:00:12.899722 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 00:00:12.899729 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 00:00:12.899737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 00:00:12.899744 kernel: audit: initializing netlink subsys (disabled)
Jul 7 00:00:12.899752 kernel: audit: type=2000 audit(1751846412.326:1): state=initialized audit_enabled=0 res=1
Jul 7 00:00:12.899759 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 00:00:12.899769 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 00:00:12.899777 kernel: cpuidle: using governor menu
Jul 7 00:00:12.899784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 00:00:12.899791 kernel: dca service started, version 1.12.1
Jul 7 00:00:12.899799 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 7 00:00:12.899806 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 7 00:00:12.899814 kernel: PCI: Using configuration type 1 for base access
Jul 7 00:00:12.899824 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 00:00:12.899832 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 00:00:12.899844 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 00:00:12.899851 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 00:00:12.899859 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 00:00:12.899866 kernel: ACPI: Added _OSI(Module Device)
Jul 7 00:00:12.899874 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 00:00:12.899881 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 00:00:12.899889 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 00:00:12.899896 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 7 00:00:12.899903 kernel: ACPI: Interpreter enabled
Jul 7 00:00:12.899913 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 7 00:00:12.899920 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 00:00:12.899928 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 00:00:12.899935 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 00:00:12.899943 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 00:00:12.899950 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 00:00:12.900155 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 00:00:12.900284 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 00:00:12.900426 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 00:00:12.900437 kernel: PCI host bridge to bus 0000:00
Jul 7 00:00:12.900562 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 00:00:12.900673 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 00:00:12.900784 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 00:00:12.900900 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 7 00:00:12.901011 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 7 00:00:12.901136 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 7 00:00:12.901248 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 00:00:12.901403 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 7 00:00:12.901537 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 7 00:00:12.901659 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 7 00:00:12.901779 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 7 00:00:12.901905 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 7 00:00:12.902024 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 7 00:00:12.902153 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 00:00:12.902285 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 7 00:00:12.903488 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 7 00:00:12.903615 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 7 00:00:12.903736 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 7 00:00:12.903879 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 7 00:00:12.904004 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 7 00:00:12.904137 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 7 00:00:12.904261 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 7 00:00:12.904412 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 7 00:00:12.904536 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 7 00:00:12.904662 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 7 00:00:12.904782 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 7 00:00:12.904902 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 7 00:00:12.905032 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 7 00:00:12.905165 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 00:00:12.905293 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 7 00:00:12.905438 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 7 00:00:12.905566 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 7 00:00:12.906890 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 7 00:00:12.907012 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 7 00:00:12.907022 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 00:00:12.907031 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 00:00:12.907039 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 00:00:12.907046 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 00:00:12.907054 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 00:00:12.907066 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 00:00:12.907073 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 00:00:12.907081 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 00:00:12.907088 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 00:00:12.907104 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 00:00:12.907111 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 00:00:12.907119 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 00:00:12.907126 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 00:00:12.907134 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 00:00:12.907144 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 00:00:12.907152 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 00:00:12.907159 kernel: iommu: Default domain type: Translated
Jul 7 00:00:12.907167 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 00:00:12.907174 kernel: efivars: Registered efivars operations
Jul 7 00:00:12.907182 kernel: PCI: Using ACPI for IRQ routing
Jul 7 00:00:12.907190 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 00:00:12.907197 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 7 00:00:12.907204 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 7 00:00:12.907215 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 7 00:00:12.907222 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 7 00:00:12.907359 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 00:00:12.907480 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 00:00:12.907598 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 00:00:12.907608 kernel: vgaarb: loaded
Jul 7 00:00:12.907616 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 00:00:12.907623 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 00:00:12.907631 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 00:00:12.907643 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 00:00:12.907651 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 00:00:12.907659 kernel: pnp: PnP ACPI init
Jul 7 00:00:12.907787 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 7 00:00:12.907798 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 00:00:12.907806 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 00:00:12.907814 kernel: NET: Registered PF_INET protocol family
Jul 7 00:00:12.907822 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 00:00:12.907835 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 00:00:12.907843 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 00:00:12.907852 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 00:00:12.907861 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 00:00:12.907868 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 00:00:12.907876 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:00:12.907884 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 00:00:12.907891 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 00:00:12.907899 kernel: NET: Registered PF_XDP protocol family
Jul 7 00:00:12.908024 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 7 00:00:12.908155 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 7 00:00:12.908268 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 00:00:12.908432 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 00:00:12.908542 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 00:00:12.908650 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 7 00:00:12.908758 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 7 00:00:12.908872 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 7 00:00:12.908882 kernel: PCI: CLS 0 bytes, default 64
Jul 7 00:00:12.908890 kernel: Initialise system trusted keyrings
Jul 7 00:00:12.908898 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 00:00:12.908905 kernel: Key type asymmetric registered
Jul 7 00:00:12.908913 kernel: Asymmetric key parser 'x509' registered
Jul 7 00:00:12.908920 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 7 00:00:12.908928 kernel: io scheduler mq-deadline registered
Jul 7 00:00:12.908935 kernel: io scheduler kyber registered
Jul 7 00:00:12.908946 kernel: io scheduler bfq registered
Jul 7 00:00:12.908954 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 00:00:12.908962 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 00:00:12.908970 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 00:00:12.908977 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 00:00:12.908985 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 00:00:12.908993 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 00:00:12.909000 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 00:00:12.909008 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 00:00:12.909018 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 00:00:12.909156 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 00:00:12.909168 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 00:00:12.909281 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 00:00:12.909410 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T00:00:12 UTC (1751846412)
Jul 7 00:00:12.909523 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 7 00:00:12.909533 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 00:00:12.909541 kernel: efifb: probing for efifb
Jul 7 00:00:12.909553 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 7 00:00:12.909561 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 7 00:00:12.909568 kernel: efifb: scrolling: redraw
Jul 7 00:00:12.909576 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 7 00:00:12.909584 kernel: Console: switching to colour frame buffer device 100x37
Jul 7 00:00:12.909591 kernel: fb0: EFI VGA frame buffer device
Jul 7 00:00:12.909617 kernel: pstore: Using crash dump compression: deflate
Jul 7 00:00:12.909628 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 00:00:12.909636 kernel: NET: Registered PF_INET6 protocol family
Jul 7 00:00:12.909646 kernel: Segment Routing with IPv6
Jul 7 00:00:12.909653 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 00:00:12.909661 kernel: NET: Registered PF_PACKET protocol family
Jul 7 00:00:12.909669 kernel: Key type dns_resolver registered
Jul 7 00:00:12.909677 kernel: IPI shorthand broadcast: enabled
Jul 7 00:00:12.909685 kernel: sched_clock: Marking stable (555003056, 109710637)->(716569568, -51855875)
Jul 7 00:00:12.909693 kernel: registered taskstats version 1
Jul 7 00:00:12.909700 kernel: Loading compiled-in X.509 certificates
Jul 7 00:00:12.909709 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 7 00:00:12.909719 kernel: Key type .fscrypt registered
Jul 7 00:00:12.909726 kernel: Key type fscrypt-provisioning registered
Jul 7 00:00:12.909734 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 00:00:12.909742 kernel: ima: Allocated hash algorithm: sha1
Jul 7 00:00:12.909750 kernel: ima: No architecture policies found
Jul 7 00:00:12.909758 kernel: clk: Disabling unused clocks
Jul 7 00:00:12.909766 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 7 00:00:12.909774 kernel: Write protecting the kernel read-only data: 36864k
Jul 7 00:00:12.909782 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 7 00:00:12.909792 kernel: Run /init as init process
Jul 7 00:00:12.909800 kernel: with arguments:
Jul 7 00:00:12.909808 kernel: /init
Jul 7 00:00:12.909816 kernel: with environment:
Jul 7 00:00:12.909826 kernel: HOME=/
Jul 7 00:00:12.909833 kernel: TERM=linux
Jul 7 00:00:12.909841 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 00:00:12.909851 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 7 00:00:12.909864 systemd[1]: Detected virtualization kvm.
Jul 7 00:00:12.909873 systemd[1]: Detected architecture x86-64.
Jul 7 00:00:12.909881 systemd[1]: Running in initrd.
Jul 7 00:00:12.909889 systemd[1]: No hostname configured, using default hostname.
Jul 7 00:00:12.909897 systemd[1]: Hostname set to <localhost>.
Jul 7 00:00:12.909911 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 00:00:12.909920 systemd[1]: Queued start job for default target initrd.target.
Jul 7 00:00:12.909928 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 00:00:12.909937 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 00:00:12.909946 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 00:00:12.909955 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 00:00:12.909975 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 00:00:12.909983 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 00:00:12.910013 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 00:00:12.910029 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 00:00:12.910038 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 00:00:12.910047 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 00:00:12.910070 systemd[1]: Reached target paths.target - Path Units.
Jul 7 00:00:12.910079 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 00:00:12.910087 systemd[1]: Reached target swap.target - Swaps.
Jul 7 00:00:12.910107 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 00:00:12.910115 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 00:00:12.910124 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 00:00:12.910132 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 00:00:12.910141 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 7 00:00:12.910149 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 00:00:12.910158 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 00:00:12.910166 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 00:00:12.910177 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 00:00:12.910186 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 00:00:12.910194 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 00:00:12.910203 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 00:00:12.910211 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 00:00:12.910219 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 00:00:12.910228 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 00:00:12.910237 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:12.910245 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 00:00:12.910256 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 00:00:12.910264 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 00:00:12.910273 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 00:00:12.910282 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 00:00:12.910327 systemd-journald[193]: Collecting audit messages is disabled.
Jul 7 00:00:12.910347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 00:00:12.910355 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:12.910364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:12.910376 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 00:00:12.910385 systemd-journald[193]: Journal started
Jul 7 00:00:12.910403 systemd-journald[193]: Runtime Journal (/run/log/journal/450e72adb0834fb8ba11d94afb2214c6) is 6.0M, max 48.3M, 42.2M free.
Jul 7 00:00:12.887249 systemd-modules-load[194]: Inserted module 'overlay'
Jul 7 00:00:12.911590 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 00:00:12.917337 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 00:00:12.919968 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 7 00:00:12.920321 kernel: Bridge firewalling registered
Jul 7 00:00:12.921455 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 00:00:12.921894 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 00:00:12.924317 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:12.927947 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 00:00:12.930772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 00:00:12.941102 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 00:00:12.945552 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 00:00:12.947784 dracut-cmdline[220]: dracut-dracut-053
Jul 7 00:00:12.949283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 00:00:12.951491 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 7 00:00:13.004995 systemd-resolved[236]: Positive Trust Anchors:
Jul 7 00:00:13.005010 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 00:00:13.005040 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 00:00:13.007487 systemd-resolved[236]: Defaulting to hostname 'linux'.
Jul 7 00:00:13.008526 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 00:00:13.014923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 00:00:13.053335 kernel: SCSI subsystem initialized
Jul 7 00:00:13.063329 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 00:00:13.073328 kernel: iscsi: registered transport (tcp)
Jul 7 00:00:13.099333 kernel: iscsi: registered transport (qla4xxx)
Jul 7 00:00:13.099366 kernel: QLogic iSCSI HBA Driver
Jul 7 00:00:13.152014 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 00:00:13.156561 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 00:00:13.184155 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 00:00:13.184248 kernel: device-mapper: uevent: version 1.0.3
Jul 7 00:00:13.184265 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 7 00:00:13.227347 kernel: raid6: avx2x4 gen() 30487 MB/s
Jul 7 00:00:13.244350 kernel: raid6: avx2x2 gen() 31552 MB/s
Jul 7 00:00:13.261366 kernel: raid6: avx2x1 gen() 26080 MB/s
Jul 7 00:00:13.261430 kernel: raid6: using algorithm avx2x2 gen() 31552 MB/s
Jul 7 00:00:13.279400 kernel: raid6: .... xor() 19984 MB/s, rmw enabled
Jul 7 00:00:13.279475 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 00:00:13.299353 kernel: xor: automatically using best checksumming function avx
Jul 7 00:00:13.452324 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 00:00:13.464733 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 00:00:13.472529 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 00:00:13.485433 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jul 7 00:00:13.490208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 00:00:13.502448 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 00:00:13.514886 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jul 7 00:00:13.545741 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 00:00:13.561451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 00:00:13.623292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 00:00:13.635503 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 00:00:13.647892 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 00:00:13.650767 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 00:00:13.653730 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 00:00:13.656053 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 00:00:13.666364 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 00:00:13.665496 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 00:00:13.677585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 00:00:13.682321 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 00:00:13.682937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 7 00:00:13.683096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:13.684468 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:13.685530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:13.685689 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:13.691459 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:13.697322 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 7 00:00:13.697353 kernel: libata version 3.00 loaded.
Jul 7 00:00:13.698323 kernel: AES CTR mode by8 optimization enabled
Jul 7 00:00:13.702613 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 00:00:13.701797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:13.707239 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 00:00:13.711067 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 00:00:13.711123 kernel: GPT:9289727 != 19775487
Jul 7 00:00:13.711137 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 00:00:13.711150 kernel: GPT:9289727 != 19775487
Jul 7 00:00:13.711163 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 00:00:13.711183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:00:13.711198 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 00:00:13.711410 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 00:00:13.707367 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:13.714853 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 00:00:13.716800 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 7 00:00:13.716963 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 00:00:13.723348 kernel: scsi host0: ahci
Jul 7 00:00:13.725472 kernel: scsi host1: ahci
Jul 7 00:00:13.728345 kernel: scsi host2: ahci
Jul 7 00:00:13.733451 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (458)
Jul 7 00:00:13.733492 kernel: scsi host3: ahci
Jul 7 00:00:13.735207 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 00:00:13.735805 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (457)
Jul 7 00:00:13.740366 kernel: scsi host4: ahci
Jul 7 00:00:13.740330 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 00:00:13.747753 kernel: scsi host5: ahci
Jul 7 00:00:13.747964 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 7 00:00:13.747977 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 7 00:00:13.747799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 00:00:13.754726 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 7 00:00:13.754749 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 7 00:00:13.754759 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 7 00:00:13.754769 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 7 00:00:13.762388 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 00:00:13.764763 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 00:00:13.772152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 00:00:13.784420 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 00:00:13.787316 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 00:00:13.793036 disk-uuid[560]: Primary Header is updated.
Jul 7 00:00:13.793036 disk-uuid[560]: Secondary Entries is updated.
Jul 7 00:00:13.793036 disk-uuid[560]: Secondary Header is updated.
Jul 7 00:00:13.796321 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:00:13.799327 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:00:13.811597 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 00:00:14.063799 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 00:00:14.063898 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 00:00:14.063910 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 00:00:14.063920 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 00:00:14.063930 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 00:00:14.065333 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 00:00:14.065358 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 00:00:14.066330 kernel: ata3.00: applying bridge limits
Jul 7 00:00:14.066345 kernel: ata3.00: configured for UDMA/100
Jul 7 00:00:14.067339 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 00:00:14.114859 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 00:00:14.115075 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 00:00:14.127330 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 00:00:14.820325 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 00:00:14.820384 disk-uuid[562]: The operation has completed successfully.
Jul 7 00:00:14.853351 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 00:00:14.853465 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 00:00:14.877431 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 00:00:14.882496 sh[595]: Success
Jul 7 00:00:14.894335 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 7 00:00:14.925484 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 00:00:14.941672 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 00:00:14.943982 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 00:00:14.954366 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 7 00:00:14.954395 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:00:14.954406 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 7 00:00:14.955330 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 7 00:00:14.956601 kernel: BTRFS info (device dm-0): using free space tree
Jul 7 00:00:14.960201 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 00:00:14.962279 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 00:00:14.972442 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 00:00:14.975200 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 00:00:14.982879 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:14.982912 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:00:14.982933 kernel: BTRFS info (device vda6): using free space tree
Jul 7 00:00:14.986324 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 00:00:14.994503 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 7 00:00:14.996569 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:15.005113 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 00:00:15.021485 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 00:00:15.078755 ignition[689]: Ignition 2.19.0
Jul 7 00:00:15.078766 ignition[689]: Stage: fetch-offline
Jul 7 00:00:15.078805 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:15.078815 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:00:15.078904 ignition[689]: parsed url from cmdline: ""
Jul 7 00:00:15.078908 ignition[689]: no config URL provided
Jul 7 00:00:15.078913 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 00:00:15.078921 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Jul 7 00:00:15.078945 ignition[689]: op(1): [started] loading QEMU firmware config module
Jul 7 00:00:15.078951 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 00:00:15.085949 ignition[689]: op(1): [finished] loading QEMU firmware config module
Jul 7 00:00:15.095029 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 00:00:15.103422 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 00:00:15.129127 systemd-networkd[783]: lo: Link UP
Jul 7 00:00:15.129135 systemd-networkd[783]: lo: Gained carrier
Jul 7 00:00:15.131944 ignition[689]: parsing config with SHA512: 800717f4646a1bc3083eee35a84e6ff3af2cdfa051f9ff07e4249fdd7e93830f9244b4d6d64a49edfa416239550dc30710692ab784b3db4252bb2c9b14beda9a
Jul 7 00:00:15.132100 systemd-networkd[783]: Enumeration completed
Jul 7 00:00:15.132201 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 00:00:15.132808 systemd[1]: Reached target network.target - Network.
Jul 7 00:00:15.136925 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:15.136933 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 00:00:15.138861 ignition[689]: fetch-offline: fetch-offline passed
Jul 7 00:00:15.138391 unknown[689]: fetched base config from "system"
Jul 7 00:00:15.138953 ignition[689]: Ignition finished successfully
Jul 7 00:00:15.138401 unknown[689]: fetched user config from "qemu"
Jul 7 00:00:15.138534 systemd-networkd[783]: eth0: Link UP
Jul 7 00:00:15.138538 systemd-networkd[783]: eth0: Gained carrier
Jul 7 00:00:15.138545 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 00:00:15.142199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 00:00:15.143864 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 00:00:15.154472 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 00:00:15.164371 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 00:00:15.170444 ignition[787]: Ignition 2.19.0
Jul 7 00:00:15.170454 ignition[787]: Stage: kargs
Jul 7 00:00:15.170601 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:15.170612 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:00:15.171422 ignition[787]: kargs: kargs passed
Jul 7 00:00:15.171461 ignition[787]: Ignition finished successfully
Jul 7 00:00:15.177517 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 00:00:15.187439 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 00:00:15.200104 ignition[796]: Ignition 2.19.0
Jul 7 00:00:15.200114 ignition[796]: Stage: disks
Jul 7 00:00:15.200272 ignition[796]: no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:15.200282 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:00:15.201228 ignition[796]: disks: disks passed
Jul 7 00:00:15.201281 ignition[796]: Ignition finished successfully
Jul 7 00:00:15.206174 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 7 00:00:15.206771 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 7 00:00:15.208655 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 7 00:00:15.210703 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 7 00:00:15.213156 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 7 00:00:15.214898 systemd[1]: Reached target basic.target - Basic System.
Jul 7 00:00:15.224444 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 7 00:00:15.235829 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 7 00:00:15.241910 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 7 00:00:15.247409 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 7 00:00:15.330410 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 7 00:00:15.330876 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 7 00:00:15.332194 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 7 00:00:15.343380 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:15.345250 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 7 00:00:15.346610 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 7 00:00:15.346656 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 7 00:00:15.357747 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Jul 7 00:00:15.357774 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:15.357788 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 00:00:15.357802 kernel: BTRFS info (device vda6): using free space tree
Jul 7 00:00:15.346682 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 7 00:00:15.360961 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 7 00:00:15.353446 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 7 00:00:15.358564 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 7 00:00:15.362816 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 7 00:00:15.369923 systemd-resolved[236]: Detected conflict on linux IN A 10.0.0.146
Jul 7 00:00:15.369935 systemd-resolved[236]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Jul 7 00:00:15.396327 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Jul 7 00:00:15.401632 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Jul 7 00:00:15.405083 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Jul 7 00:00:15.409916 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 7 00:00:15.491719 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 7 00:00:15.499461 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 7 00:00:15.501015 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 7 00:00:15.507362 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 7 00:00:15.524914 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 7 00:00:15.528181 ignition[928]: INFO : Ignition 2.19.0
Jul 7 00:00:15.528181 ignition[928]: INFO : Stage: mount
Jul 7 00:00:15.529871 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 7 00:00:15.529871 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 00:00:15.529871 ignition[928]: INFO : mount: mount passed
Jul 7 00:00:15.529871 ignition[928]: INFO : Ignition finished successfully
Jul 7 00:00:15.532544 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 7 00:00:15.541418 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 7 00:00:15.953765 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 7 00:00:15.969511 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 7 00:00:15.976329 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Jul 7 00:00:15.978918 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 00:00:15.978935 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 00:00:15.978946 kernel: BTRFS info (device vda6): using free space tree Jul 7 00:00:15.981322 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 00:00:15.982839 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 00:00:16.001497 ignition[959]: INFO : Ignition 2.19.0 Jul 7 00:00:16.001497 ignition[959]: INFO : Stage: files Jul 7 00:00:16.003082 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:16.003082 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:00:16.003082 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jul 7 00:00:16.006605 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 00:00:16.006605 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 00:00:16.006605 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 00:00:16.006605 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 00:00:16.006605 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 00:00:16.005879 unknown[959]: wrote ssh authorized keys file for user: core Jul 7 00:00:16.014179 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 00:00:16.014179 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 7 00:00:16.047568 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 00:00:16.225213 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 7 00:00:16.225213 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/etc/flatcar/update.conf" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:00:16.228887 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 7 00:00:16.786911 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 00:00:17.023476 systemd-networkd[783]: eth0: Gained IPv6LL Jul 7 00:00:17.154504 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 7 00:00:17.154504 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 7 00:00:17.158314 ignition[959]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 7 00:00:17.187220 ignition[959]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:00:17.192155 ignition[959]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 00:00:17.193673 ignition[959]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 7 00:00:17.193673 ignition[959]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 7 00:00:17.193673 ignition[959]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 00:00:17.193673 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 00:00:17.193673 ignition[959]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 
00:00:17.193673 ignition[959]: INFO : files: files passed Jul 7 00:00:17.193673 ignition[959]: INFO : Ignition finished successfully Jul 7 00:00:17.204658 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 00:00:17.214519 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 00:00:17.215502 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 00:00:17.222097 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 00:00:17.222216 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 00:00:17.227887 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 00:00:17.231567 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:17.231567 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:17.234500 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 00:00:17.238011 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:00:17.238442 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 00:00:17.246414 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 00:00:17.268644 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 00:00:17.268766 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 00:00:17.269273 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 00:00:17.271999 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 00:00:17.274006 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 00:00:17.274746 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 00:00:17.291907 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:00:17.303493 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 00:00:17.312176 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:00:17.312666 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:00:17.312990 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 00:00:17.347496 ignition[1015]: INFO : Ignition 2.19.0 Jul 7 00:00:17.347496 ignition[1015]: INFO : Stage: umount Jul 7 00:00:17.347496 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 00:00:17.347496 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 00:00:17.347496 ignition[1015]: INFO : umount: umount passed Jul 7 00:00:17.347496 ignition[1015]: INFO : Ignition finished successfully Jul 7 00:00:17.313290 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 00:00:17.313414 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 00:00:17.314080 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 00:00:17.314589 systemd[1]: Stopped target basic.target - Basic System. Jul 7 00:00:17.314900 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
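The Ignition "files" stage that finished above wrote files (the helm tarball, update.conf), installed prepare-helm.service, and flipped unit presets. A minimal Butane sketch that would produce a similar stage; everything below is illustrative and assumed, not the config this machine actually booted with:

cat > example.bu <<'EOF'
variant: flatcar
version: 1.0.0
storage:
  files:
    - path: /opt/helm-v3.17.3-linux-amd64.tar.gz
      contents:
        source: https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz
systemd:
  units:
    - name: prepare-helm.service
      enabled: true
      contents: |
        [Unit]
        Description=Unpack helm to /opt/bin
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/tar -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz -C /opt/bin
        [Install]
        WantedBy=multi-user.target
EOF
# Compile to the JSON that Ignition consumes:
docker run --rm -i quay.io/coreos/butane:release < example.bu > example.ign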
Jul 7 00:00:17.315223 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 00:00:17.315648 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 00:00:17.315964 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 00:00:17.316284 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 00:00:17.316610 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 00:00:17.316928 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 00:00:17.317249 systemd[1]: Stopped target swap.target - Swaps. Jul 7 00:00:17.317539 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 00:00:17.317644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 00:00:17.318373 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:00:17.318691 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:00:17.318968 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 00:00:17.319076 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:00:17.319318 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 00:00:17.319422 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 00:00:17.319924 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 00:00:17.320036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 00:00:17.320636 systemd[1]: Stopped target paths.target - Path Units. Jul 7 00:00:17.320865 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 00:00:17.324348 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:00:17.324687 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 00:00:17.324993 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 00:00:17.325323 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 00:00:17.325412 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 00:00:17.325822 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 00:00:17.325911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 00:00:17.326280 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 00:00:17.326401 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 00:00:17.326750 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 00:00:17.326874 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 00:00:17.327899 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 00:00:17.328095 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 00:00:17.328200 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:00:17.329255 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 00:00:17.329524 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 00:00:17.329622 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:00:17.329912 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 00:00:17.330017 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 7 00:00:17.333740 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 00:00:17.333839 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 00:00:17.347786 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 00:00:17.347893 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 00:00:17.349512 systemd[1]: Stopped target network.target - Network. Jul 7 00:00:17.350927 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 00:00:17.350987 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 00:00:17.352845 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 00:00:17.352891 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 00:00:17.355031 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 00:00:17.355077 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 00:00:17.356713 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 00:00:17.356760 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 00:00:17.358751 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 00:00:17.360681 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 00:00:17.362344 systemd-networkd[783]: eth0: DHCPv6 lease lost Jul 7 00:00:17.363646 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 00:00:17.364150 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 00:00:17.364265 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 00:00:17.366681 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 00:00:17.366739 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:00:17.373448 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 00:00:17.375163 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 00:00:17.375243 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 00:00:17.377327 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:17.381278 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 00:00:17.381460 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 00:00:17.386414 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 00:00:17.386513 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:00:17.388045 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 00:00:17.388108 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 00:00:17.389285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 00:00:17.389348 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:17.409418 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 00:00:17.409545 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 00:00:17.410916 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 00:00:17.411090 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:00:17.413960 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 00:00:17.414035 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
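The teardown above stops the initrd's networkd and resolved, drops the DHCPv6 lease, and unwinds udevd so the real root can start fresh instances. A sketch for confirming the handover later from the booted system; the grep patterns simply mirror wording in this log:

journalctl -u systemd-networkd --no-pager | grep -E 'Gained carrier|DHCPv[46]|lease lost'
networkctl list    # link state once the real root's networkd owns eth0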
Jul 7 00:00:17.415322 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 00:00:17.415365 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:00:17.417215 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 00:00:17.417265 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 00:00:17.419269 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 00:00:17.419337 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 00:00:17.421001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 00:00:17.421052 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 00:00:17.432498 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 00:00:17.433675 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 00:00:17.433739 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:17.435888 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 00:00:17.435949 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:00:17.440924 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 00:00:17.441045 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 00:00:17.584274 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 00:00:17.584426 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 00:00:17.586617 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 00:00:17.588468 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 00:00:17.588521 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 00:00:17.600417 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 00:00:17.609432 systemd[1]: Switching root. Jul 7 00:00:17.643653 systemd-journald[193]: Journal stopped Jul 7 00:00:18.955245 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jul 7 00:00:18.955335 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 00:00:18.955350 kernel: SELinux: policy capability open_perms=1 Jul 7 00:00:18.955366 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 00:00:18.955380 kernel: SELinux: policy capability always_check_network=0 Jul 7 00:00:18.955391 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 00:00:18.955402 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 00:00:18.955414 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 00:00:18.955425 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 00:00:18.955436 kernel: audit: type=1403 audit(1751846418.116:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 00:00:18.955453 systemd[1]: Successfully loaded SELinux policy in 43.048ms. Jul 7 00:00:18.955474 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.630ms. Jul 7 00:00:18.955486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 00:00:18.955501 systemd[1]: Detected virtualization kvm. 
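"Journal stopped" followed by SIGTERM from PID 1 marks the switch-root: the initrd journal is handed over and the real root's journald takes over, which is why the SELinux policy load and the systemd 255 feature banner appear next. The initrd-stage logs remain queryable afterwards, for example:

journalctl --list-boots                      # the current boot is index 0
journalctl -b 0 -u ignition-files.service    # Ignition "files" output survives switch-root
journalctl -b 0 _TRANSPORT=kernel | head     # kernel lines, e.g. the SELinux capabilities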
Jul 7 00:00:18.955513 systemd[1]: Detected architecture x86-64. Jul 7 00:00:18.955524 systemd[1]: Detected first boot. Jul 7 00:00:18.955536 systemd[1]: Initializing machine ID from VM UUID. Jul 7 00:00:18.955549 zram_generator::config[1059]: No configuration found. Jul 7 00:00:18.955562 systemd[1]: Populated /etc with preset unit settings. Jul 7 00:00:18.955574 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 00:00:18.955585 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 00:00:18.955599 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 00:00:18.955611 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 00:00:18.955623 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 00:00:18.955635 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 00:00:18.955647 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 00:00:18.955658 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 00:00:18.955670 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 00:00:18.955682 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 00:00:18.955696 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 00:00:18.955708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 00:00:18.955720 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 00:00:18.955732 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 00:00:18.955744 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 00:00:18.955756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 00:00:18.955768 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 00:00:18.955780 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 00:00:18.955796 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 00:00:18.955812 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 00:00:18.955824 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 00:00:18.955836 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 00:00:18.955848 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 00:00:18.955859 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 00:00:18.955871 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 00:00:18.955888 systemd[1]: Reached target slices.target - Slice Units. Jul 7 00:00:18.955900 systemd[1]: Reached target swap.target - Swaps. Jul 7 00:00:18.955915 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 00:00:18.955935 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 00:00:18.955948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 00:00:18.955959 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. 
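"Populated /etc with preset unit settings" applies vendor presets plus the ones Ignition wrote (prepare-helm.service enabled, coreos-metadata.service disabled). A quick way to audit that on the running machine; note that Ignition conventionally writes its presets under /etc/systemd/system-preset/, which is an assumption here rather than something this log states:

systemctl is-enabled prepare-helm.service
grep -rn helm /etc/systemd/system-preset/ /usr/lib/systemd/system-preset/ 2>/dev/null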
Jul 7 00:00:18.955971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 00:00:18.955983 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 00:00:18.955995 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 00:00:18.956007 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 00:00:18.956018 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 00:00:18.956032 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:18.956044 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 00:00:18.956056 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 00:00:18.956068 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 00:00:18.956082 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 00:00:18.956094 systemd[1]: Reached target machines.target - Containers. Jul 7 00:00:18.956106 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 00:00:18.956118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:00:18.956132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 00:00:18.956144 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 00:00:18.956156 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:00:18.956169 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:00:18.956180 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:00:18.956192 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 00:00:18.956204 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:00:18.956215 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 00:00:18.956227 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 00:00:18.956242 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 00:00:18.956253 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 00:00:18.956265 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 00:00:18.956279 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 00:00:18.956291 kernel: fuse: init (API version 7.39) Jul 7 00:00:18.956372 kernel: loop: module loaded Jul 7 00:00:18.956384 kernel: ACPI: bus type drm_connector registered Jul 7 00:00:18.956414 systemd-journald[1126]: Collecting audit messages is disabled. Jul 7 00:00:18.956442 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 00:00:18.956454 systemd-journald[1126]: Journal started Jul 7 00:00:18.956475 systemd-journald[1126]: Runtime Journal (/run/log/journal/450e72adb0834fb8ba11d94afb2214c6) is 6.0M, max 48.3M, 42.2M free. Jul 7 00:00:18.613842 systemd[1]: Queued start job for default target multi-user.target. Jul 7 00:00:18.634513 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jul 7 00:00:18.634958 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 00:00:18.960316 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 00:00:18.963639 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 00:00:18.968792 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 00:00:18.970493 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 00:00:18.970522 systemd[1]: Stopped verity-setup.service. Jul 7 00:00:18.973349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:18.976435 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 00:00:18.977864 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 00:00:18.978997 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 00:00:18.980136 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 00:00:18.981168 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 00:00:18.982289 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 00:00:18.983512 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 00:00:18.984685 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 00:00:18.986049 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 00:00:18.987540 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 00:00:18.987709 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 00:00:18.989107 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:00:18.989273 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:00:18.990672 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:00:18.990840 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:00:18.992128 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:00:18.992292 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:00:18.993903 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 00:00:18.994077 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 00:00:18.995391 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:00:18.995553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:00:18.996837 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 00:00:18.998326 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 00:00:18.999877 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 00:00:19.014752 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 00:00:19.024380 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 00:00:19.026632 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 00:00:19.027727 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 00:00:19.027757 systemd[1]: Reached target local-fs.target - Local File Systems. 
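The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop starts above are one template unit instantiated per module name, which is why each "Deactivated successfully"/"Finished" pair looks identical. The pattern in isolation:

systemctl cat modprobe@.service              # the template; ExecStart runs modprobe against %i
sudo systemctl start modprobe@loop.service   # instantiates it for the "loop" module
lsmod | grep -E '^(loop|fuse)'               # confirms the modules the log loaded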
Jul 7 00:00:19.029733 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 00:00:19.032001 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 00:00:19.036355 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 00:00:19.038298 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:00:19.039812 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 00:00:19.042772 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 00:00:19.044011 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:00:19.046432 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 00:00:19.047500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:00:19.049474 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 00:00:19.054522 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 00:00:19.063625 systemd-journald[1126]: Time spent on flushing to /var/log/journal/450e72adb0834fb8ba11d94afb2214c6 is 132.305ms for 993 entries. Jul 7 00:00:19.063625 systemd-journald[1126]: System Journal (/var/log/journal/450e72adb0834fb8ba11d94afb2214c6) is 8.0M, max 195.6M, 187.6M free. Jul 7 00:00:19.214716 systemd-journald[1126]: Received client request to flush runtime journal. Jul 7 00:00:19.214751 kernel: loop0: detected capacity change from 0 to 142488 Jul 7 00:00:19.071162 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 00:00:19.074108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 00:00:19.075493 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 00:00:19.076798 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 00:00:19.079997 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 00:00:19.082200 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 00:00:19.089692 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 00:00:19.209766 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 00:00:19.212368 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 00:00:19.218670 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 00:00:19.224818 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 00:00:19.225632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 00:00:19.239945 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 00:00:19.240666 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 00:00:19.243131 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 7 00:00:19.246466 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
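The flush above migrates the volatile runtime journal (/run/log/journal, 6.0M as logged) into the persistent system journal (/var/log/journal, max 195.6M as logged). Observable later with:

journalctl --disk-usage   # persistent journal usage after the flush
ls /var/log/journal/      # one directory named after the machine ID (450e72... above)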
Jul 7 00:00:19.250323 kernel: loop1: detected capacity change from 0 to 229808 Jul 7 00:00:19.257479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 00:00:19.283332 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jul 7 00:00:19.283350 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jul 7 00:00:19.287490 kernel: loop2: detected capacity change from 0 to 140768 Jul 7 00:00:19.289098 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 00:00:19.445325 kernel: loop3: detected capacity change from 0 to 142488 Jul 7 00:00:19.457324 kernel: loop4: detected capacity change from 0 to 229808 Jul 7 00:00:19.464470 kernel: loop5: detected capacity change from 0 to 140768 Jul 7 00:00:19.471474 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 00:00:19.472194 (sd-merge)[1197]: Merged extensions into '/usr'. Jul 7 00:00:19.477907 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 00:00:19.477931 systemd[1]: Reloading... Jul 7 00:00:19.588373 zram_generator::config[1219]: No configuration found. Jul 7 00:00:19.680090 ldconfig[1168]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 00:00:19.729424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:19.778396 systemd[1]: Reloading finished in 299 ms. Jul 7 00:00:19.810614 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 00:00:19.812086 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 00:00:19.830469 systemd[1]: Starting ensure-sysext.service... Jul 7 00:00:19.832497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 00:00:19.840153 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Jul 7 00:00:19.840174 systemd[1]: Reloading... Jul 7 00:00:19.888047 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 00:00:19.888446 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 00:00:19.889439 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 00:00:19.889738 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jul 7 00:00:19.889816 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Jul 7 00:00:19.894509 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:00:19.895414 systemd-tmpfiles[1261]: Skipping /boot Jul 7 00:00:19.910035 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 00:00:19.910124 systemd-tmpfiles[1261]: Skipping /boot Jul 7 00:00:19.910337 zram_generator::config[1288]: No configuration found. Jul 7 00:00:20.022918 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:20.072259 systemd[1]: Reloading finished in 231 ms. 
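The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr, which is what makes the kubernetes.raw symlink written by Ignition take effect. The merge can be inspected or redone by hand:

systemd-sysext status         # lists merged extensions and their hierarchies
sudo systemd-sysext refresh   # unmerge and re-merge after changing /etc/extensions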
Jul 7 00:00:20.091681 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 00:00:20.104772 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 00:00:20.113607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:00:20.116479 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 00:00:20.120198 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 00:00:20.127262 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 00:00:20.131228 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 00:00:20.140582 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 00:00:20.145269 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:20.145589 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:00:20.148854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:00:20.155857 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:00:20.162057 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:00:20.164777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:00:20.173760 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 00:00:20.175349 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:20.179881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:00:20.180231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:00:20.188346 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:00:20.188608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:00:20.191886 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 00:00:20.194124 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:00:20.194465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:00:20.205210 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jul 7 00:00:20.207406 augenrules[1356]: No rules Jul 7 00:00:20.207830 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 00:00:20.209842 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:00:20.215967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:20.216199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:00:20.223655 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:00:20.226368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:00:20.229941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
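The systemd-tmpfiles "Duplicate line for path" warnings during the reloads above are harmless: two tmpfiles.d fragments declare the same path and the later one is ignored. To see which fragments collide, something like:

systemd-tmpfiles --cat-config | grep -n '/var/log/journal'   # every declaration of the path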
Jul 7 00:00:20.231269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:00:20.235476 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 00:00:20.236699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:20.237708 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 00:00:20.239294 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 00:00:20.240962 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 00:00:20.242756 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:00:20.243270 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:00:20.244844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:00:20.245077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:00:20.247261 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:00:20.247576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:00:20.255817 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 00:00:20.271542 systemd[1]: Finished ensure-sysext.service. Jul 7 00:00:20.278831 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:20.278997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 00:00:20.290525 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 00:00:20.378917 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 00:00:20.383425 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 00:00:20.385643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 00:00:20.386737 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 00:00:20.389504 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 00:00:20.392790 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1386) Jul 7 00:00:20.396435 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 00:00:20.397911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 00:00:20.397931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 00:00:20.398491 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 00:00:20.400391 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 00:00:20.401835 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 00:00:20.402006 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 00:00:20.402352 systemd-resolved[1331]: Positive Trust Anchors: Jul 7 00:00:20.402370 systemd-resolved[1331]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 00:00:20.402412 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 00:00:20.403431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 00:00:20.403586 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 00:00:20.445645 systemd-resolved[1331]: Defaulting to hostname 'linux'. Jul 7 00:00:20.447006 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 00:00:20.450276 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 00:00:20.451796 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 00:00:20.453024 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 00:00:20.464733 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 00:00:20.464939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 00:00:20.474995 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 00:00:20.489325 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 7 00:00:20.510789 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 7 00:00:20.516153 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 00:00:20.516368 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 7 00:00:20.516589 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 00:00:20.522383 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jul 7 00:00:20.532351 kernel: ACPI: button: Power Button [PWRF] Jul 7 00:00:20.535832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 00:00:20.545466 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 00:00:20.561117 systemd-networkd[1404]: lo: Link UP Jul 7 00:00:20.561131 systemd-networkd[1404]: lo: Gained carrier Jul 7 00:00:20.564153 systemd-networkd[1404]: Enumeration completed Jul 7 00:00:20.564240 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 00:00:20.565515 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 00:00:20.566694 systemd[1]: Reached target network.target - Network. Jul 7 00:00:20.568095 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 00:00:20.570257 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:00:20.570270 systemd-networkd[1404]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
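eth0 again matched the catch-all zz-default.network "based on potentially unpredictable interface name". A minimal sketch of a pinned per-interface unit that avoids that fallback; the MAC address below is hypothetical and must be replaced with the real NIC's:

sudo tee /etc/systemd/network/10-eth0.network <<'EOF'
[Match]
# hypothetical MAC; substitute the interface's real address
MACAddress=52:54:00:12:34:56

[Network]
DHCP=yes
EOF
sudo networkctl reload    # re-evaluate .network files without a reboot
networkctl status eth0    # confirms which .network file matched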
Jul 7 00:00:20.573473 systemd-networkd[1404]: eth0: Link UP Jul 7 00:00:20.573496 systemd-networkd[1404]: eth0: Gained carrier Jul 7 00:00:20.573508 systemd-networkd[1404]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 00:00:20.784733 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 00:00:20.789128 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 00:00:20.801453 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 00:00:20.810784 systemd-networkd[1404]: eth0: DHCPv4 address 10.0.0.146/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 00:00:20.812329 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jul 7 00:00:21.231655 kernel: kvm_amd: TSC scaling supported Jul 7 00:00:21.231675 kernel: kvm_amd: Nested Virtualization enabled Jul 7 00:00:21.231688 kernel: kvm_amd: Nested Paging enabled Jul 7 00:00:21.231701 kernel: kvm_amd: LBR virtualization supported Jul 7 00:00:21.231713 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 7 00:00:21.231730 kernel: kvm_amd: Virtual GIF supported Jul 7 00:00:21.214376 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 00:00:21.214418 systemd-timesyncd[1406]: Initial clock synchronization to Mon 2025-07-07 00:00:21.214277 UTC. Jul 7 00:00:21.214515 systemd-resolved[1331]: Clock change detected. Flushing caches. Jul 7 00:00:21.231737 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 00:00:21.258549 kernel: EDAC MC: Ver: 3.0.0 Jul 7 00:00:21.292964 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 00:00:21.301473 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 00:00:21.328100 lvm[1430]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 00:00:21.330996 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 00:00:21.366229 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 00:00:21.367674 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 00:00:21.368762 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 00:00:21.370075 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 00:00:21.371362 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 00:00:21.372858 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 00:00:21.374086 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 00:00:21.375386 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 00:00:21.376578 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 00:00:21.376606 systemd[1]: Reached target paths.target - Path Units. Jul 7 00:00:21.377775 systemd[1]: Reached target timers.target - Timer Units. Jul 7 00:00:21.379541 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 00:00:21.382124 systemd[1]: Starting docker.socket - Docker Socket for the API... 
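The backwards jump in timestamps here (kernel kvm_amd lines at 00:00:21.23 before timesyncd lines at 00:00:21.21) is the initial NTP step: timesyncd contacted 10.0.0.1:123 and stepped the clock, and resolved flushed its caches in response. Current sync state can be checked with:

timedatectl timesync-status   # server (10.0.0.1 here), offset, poll interval
timedatectl status            # "System clock synchronized: yes" once stepped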
Jul 7 00:00:21.392707 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 00:00:21.395255 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 00:00:21.396963 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 00:00:21.398087 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 00:00:21.399175 systemd[1]: Reached target basic.target - Basic System. Jul 7 00:00:21.400116 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:00:21.400141 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 00:00:21.401487 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 00:00:21.403537 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 00:00:21.408323 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 00:00:21.408631 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 00:00:21.414369 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 00:00:21.415470 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 00:00:21.423396 jq[1439]: false Jul 7 00:00:21.431535 dbus-daemon[1438]: [system] SELinux support is enabled Jul 7 00:00:21.431583 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 00:00:21.436416 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 00:00:21.440591 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 00:00:21.446516 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 00:00:21.452525 extend-filesystems[1440]: Found loop3 Jul 7 00:00:21.452525 extend-filesystems[1440]: Found loop4 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found loop5 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found sr0 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda1 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda2 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda3 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found usr Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda4 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda6 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda7 Jul 7 00:00:21.462595 extend-filesystems[1440]: Found vda9 Jul 7 00:00:21.462595 extend-filesystems[1440]: Checking size of /dev/vda9 Jul 7 00:00:21.491497 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 00:00:21.491838 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1390) Jul 7 00:00:21.463464 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 00:00:21.495335 extend-filesystems[1440]: Resized partition /dev/vda9 Jul 7 00:00:21.464426 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 00:00:21.499077 extend-filesystems[1459]: resize2fs 1.47.1 (20-May-2024) Jul 7 00:00:21.464952 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jul 7 00:00:21.500744 update_engine[1457]: I20250707 00:00:21.493649 1457 main.cc:92] Flatcar Update Engine starting Jul 7 00:00:21.500744 update_engine[1457]: I20250707 00:00:21.495100 1457 update_check_scheduler.cc:74] Next update check in 5m26s Jul 7 00:00:21.465748 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 00:00:21.501154 jq[1460]: true Jul 7 00:00:21.470470 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 00:00:21.474599 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 00:00:21.485206 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 00:00:21.493944 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 00:00:21.494209 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 00:00:21.494578 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 00:00:21.495108 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 00:00:21.501137 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 00:00:21.501521 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 00:00:21.516280 jq[1464]: true Jul 7 00:00:21.595290 (ntainerd)[1465]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 00:00:21.596452 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 00:00:21.606217 tar[1463]: linux-amd64/LICENSE Jul 7 00:00:21.622958 tar[1463]: linux-amd64/helm Jul 7 00:00:21.608803 systemd[1]: Started update-engine.service - Update Engine. Jul 7 00:00:21.614790 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 00:00:21.614816 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 00:00:21.616144 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 00:00:21.616160 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 00:00:21.623620 extend-filesystems[1459]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 00:00:21.623620 extend-filesystems[1459]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 00:00:21.623620 extend-filesystems[1459]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 00:00:21.627758 extend-filesystems[1440]: Resized filesystem in /dev/vda9 Jul 7 00:00:21.627807 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 00:00:21.630235 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Jul 7 00:00:21.631120 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 00:00:21.631366 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 00:00:21.631617 systemd-logind[1455]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 00:00:21.633231 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
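extend-filesystems enumerated the disk above, then grew the root ext4 online from 553472 to 1864699 4k blocks (roughly 2.1G to 7.1G) with resize2fs 1.47.1 while / stayed mounted. The equivalent manual step:

lsblk -o NAME,SIZE /dev/vda9   # partition size the filesystem can grow into
sudo resize2fs /dev/vda9       # online grow; ext4 supports this while mounted
df -h /                        # now reflects the 1864699-block filesystem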
Jul 7 00:00:21.633371 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 00:00:21.635395 systemd-logind[1455]: New seat seat0. Jul 7 00:00:21.637530 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 00:00:21.640336 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 00:00:21.696520 locksmithd[1493]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 00:00:21.980728 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 00:00:22.028264 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 00:00:22.038541 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 00:00:22.048515 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 00:00:22.048920 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 00:00:22.052144 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 00:00:22.165878 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 00:00:22.177614 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 00:00:22.179948 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 00:00:22.181239 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 00:00:22.226860 systemd-networkd[1404]: eth0: Gained IPv6LL Jul 7 00:00:22.244775 containerd[1465]: time="2025-07-07T00:00:22.244580297Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 00:00:22.271513 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 00:00:22.274465 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 00:00:22.277652 containerd[1465]: time="2025-07-07T00:00:22.277557870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280122950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280152084Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280171531Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280380983Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280403606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280476242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280554 containerd[1465]: time="2025-07-07T00:00:22.280488635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280881 containerd[1465]: time="2025-07-07T00:00:22.280860903Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280935 containerd[1465]: time="2025-07-07T00:00:22.280922218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:22.280984 containerd[1465]: time="2025-07-07T00:00:22.280971511Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:22.281033 containerd[1465]: time="2025-07-07T00:00:22.281021204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:22.281170 containerd[1465]: time="2025-07-07T00:00:22.281155105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:22.281489 containerd[1465]: time="2025-07-07T00:00:22.281472470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 7 00:00:22.281674 containerd[1465]: time="2025-07-07T00:00:22.281652337Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 00:00:22.281725 containerd[1465]: time="2025-07-07T00:00:22.281713462Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 00:00:22.281870 containerd[1465]: time="2025-07-07T00:00:22.281854727Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 00:00:22.281985 containerd[1465]: time="2025-07-07T00:00:22.281970414Z" level=info msg="metadata content store policy set" policy=shared Jul 7 00:00:22.282826 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 00:00:22.284698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:22.287488 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 00:00:22.292825 containerd[1465]: time="2025-07-07T00:00:22.292763851Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 00:00:22.292929 containerd[1465]: time="2025-07-07T00:00:22.292872986Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 00:00:22.292929 containerd[1465]: time="2025-07-07T00:00:22.292895147Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 00:00:22.292929 containerd[1465]: time="2025-07-07T00:00:22.292923370Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 00:00:22.292982 containerd[1465]: time="2025-07-07T00:00:22.292940422Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 00:00:22.293151 containerd[1465]: time="2025-07-07T00:00:22.293118586Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.296442529Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298452538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298470742Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298484017Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298500337Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298518461Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298533780Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298550331Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298565840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298581048Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298595976Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298610233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298640851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300370 containerd[1465]: time="2025-07-07T00:00:22.298656550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298668292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298687007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298702456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298715521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298727934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298742171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298755395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298771856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298784500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298797685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298815749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298829955Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298863127Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298875100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300690 containerd[1465]: time="2025-07-07T00:00:22.298887904Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299009271Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299035050Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299149063Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299163951Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299175142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299192846Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299211290Z" level=info msg="NRI interface is disabled by configuration." Jul 7 00:00:22.300970 containerd[1465]: time="2025-07-07T00:00:22.299226679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.299631017Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.299684507Z" level=info msg="Connect containerd service" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.299723030Z" level=info msg="using legacy CRI server" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.299732718Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.299852673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.300602258Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 00:00:22.301115 
containerd[1465]: time="2025-07-07T00:00:22.300773199Z" level=info msg="Start subscribing containerd event" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.300875801Z" level=info msg="Start recovering state" Jul 7 00:00:22.301115 containerd[1465]: time="2025-07-07T00:00:22.301069294Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 00:00:22.301401 containerd[1465]: time="2025-07-07T00:00:22.301127012Z" level=info msg="Start event monitor" Jul 7 00:00:22.301401 containerd[1465]: time="2025-07-07T00:00:22.301146398Z" level=info msg="Start snapshots syncer" Jul 7 00:00:22.301401 containerd[1465]: time="2025-07-07T00:00:22.301157279Z" level=info msg="Start cni network conf syncer for default" Jul 7 00:00:22.301401 containerd[1465]: time="2025-07-07T00:00:22.301165665Z" level=info msg="Start streaming server" Jul 7 00:00:22.301884 containerd[1465]: time="2025-07-07T00:00:22.301788583Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 00:00:22.301930 containerd[1465]: time="2025-07-07T00:00:22.301906914Z" level=info msg="containerd successfully booted in 0.058810s" Jul 7 00:00:22.357871 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 00:00:22.359841 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 00:00:22.385421 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 7 00:00:22.385669 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 00:00:22.387585 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 00:00:22.674042 tar[1463]: linux-amd64/README.md Jul 7 00:00:22.689404 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 00:00:24.064772 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:24.066787 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 00:00:24.068370 systemd[1]: Startup finished in 683ms (kernel) + 5.416s (initrd) + 5.591s (userspace) = 11.692s. Jul 7 00:00:24.072283 (kubelet)[1552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:00:24.751817 kubelet[1552]: E0707 00:00:24.751712 1552 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:00:24.757092 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:00:24.757289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:00:24.757769 systemd[1]: kubelet.service: Consumed 2.303s CPU time. Jul 7 00:00:26.526662 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 00:00:26.528076 systemd[1]: Started sshd@0-10.0.0.146:22-10.0.0.1:37254.service - OpenSSH per-connection server daemon (10.0.0.1:37254). Jul 7 00:00:26.574670 sshd[1566]: Accepted publickey for core from 10.0.0.1 port 37254 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:26.577102 sshd[1566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:26.587476 systemd-logind[1455]: New session 1 of user core. 
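
containerd comes up with CRI networking still degraded: the "no network config found in /etc/cni/net.d" error above persists until some CNI plugin later writes a config file into that directory. A minimal stdlib sketch of the same directory check; this is not containerd's actual loader (which goes through libcni), just an illustration of what it is looking for:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// The CRI plugin stays not-ready until at least one CNI config
// (.conf/.conflist) exists in /etc/cni/net.d.
func main() {
	dir := "/etc/cni/net.d"
	matches, _ := filepath.Glob(filepath.Join(dir, "*.conf*"))
	if len(matches) == 0 {
		fmt.Printf("no network config found in %s (matches the log above)\n", dir)
		return
	}
	for _, m := range matches {
		raw, err := os.ReadFile(m)
		if err != nil {
			fmt.Println(m, err)
			continue
		}
		var conf struct {
			Name string `json:"name"`
			Type string `json:"type"`
		}
		_ = json.Unmarshal(raw, &conf) // top-level fields only; enough for a listing
		fmt.Printf("%s: name=%q type=%q\n", m, conf.Name, conf.Type)
	}
}
```
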
Jul 7 00:00:26.589129 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 00:00:26.602716 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 00:00:26.616556 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 00:00:26.629666 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 00:00:26.633016 (systemd)[1570]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 00:00:26.758542 systemd[1570]: Queued start job for default target default.target. Jul 7 00:00:26.767958 systemd[1570]: Created slice app.slice - User Application Slice. Jul 7 00:00:26.767990 systemd[1570]: Reached target paths.target - Paths. Jul 7 00:00:26.768005 systemd[1570]: Reached target timers.target - Timers. Jul 7 00:00:26.769831 systemd[1570]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 00:00:26.785024 systemd[1570]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 00:00:26.785193 systemd[1570]: Reached target sockets.target - Sockets. Jul 7 00:00:26.785216 systemd[1570]: Reached target basic.target - Basic System. Jul 7 00:00:26.785261 systemd[1570]: Reached target default.target - Main User Target. Jul 7 00:00:26.785303 systemd[1570]: Startup finished in 144ms. Jul 7 00:00:26.785676 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 00:00:26.787196 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 00:00:26.863816 systemd[1]: Started sshd@1-10.0.0.146:22-10.0.0.1:37260.service - OpenSSH per-connection server daemon (10.0.0.1:37260). Jul 7 00:00:26.892573 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 37260 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:26.894469 sshd[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:26.899082 systemd-logind[1455]: New session 2 of user core. Jul 7 00:00:26.909587 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 00:00:26.965857 sshd[1581]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:26.980506 systemd[1]: sshd@1-10.0.0.146:22-10.0.0.1:37260.service: Deactivated successfully. Jul 7 00:00:26.982063 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 00:00:26.983375 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Jul 7 00:00:26.993539 systemd[1]: Started sshd@2-10.0.0.146:22-10.0.0.1:37268.service - OpenSSH per-connection server daemon (10.0.0.1:37268). Jul 7 00:00:26.994495 systemd-logind[1455]: Removed session 2. Jul 7 00:00:27.019121 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 37268 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:27.020572 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:27.025361 systemd-logind[1455]: New session 3 of user core. Jul 7 00:00:27.038453 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 00:00:27.087805 sshd[1588]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:27.102787 systemd[1]: sshd@2-10.0.0.146:22-10.0.0.1:37268.service: Deactivated successfully. Jul 7 00:00:27.105105 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 00:00:27.106956 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. 
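
The "SHA256:Lb9W8z..." strings in the sshd "Accepted publickey" lines are OpenSSH key fingerprints: SHA-256 over the public key blob (the decoded base64 payload of an authorized_keys entry), encoded as unpadded base64. A small sketch of that encoding; the blob here is a stand-in, not the key from this log:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

func main() {
	// Stand-in bytes; a real fingerprint hashes the decoded wire-format key blob.
	blob := []byte("stand-in public key blob")
	sum := sha256.Sum256(blob)
	// Unpadded base64, prefixed the way sshd and ssh-keygen print it.
	fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))
}
```
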
Jul 7 00:00:27.119778 systemd[1]: Started sshd@3-10.0.0.146:22-10.0.0.1:37272.service - OpenSSH per-connection server daemon (10.0.0.1:37272). Jul 7 00:00:27.120777 systemd-logind[1455]: Removed session 3. Jul 7 00:00:27.146728 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 37272 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:27.148107 sshd[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:27.152354 systemd-logind[1455]: New session 4 of user core. Jul 7 00:00:27.163537 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 00:00:27.218727 sshd[1595]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:27.226551 systemd[1]: sshd@3-10.0.0.146:22-10.0.0.1:37272.service: Deactivated successfully. Jul 7 00:00:27.228969 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 00:00:27.230867 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Jul 7 00:00:27.240698 systemd[1]: Started sshd@4-10.0.0.146:22-10.0.0.1:37278.service - OpenSSH per-connection server daemon (10.0.0.1:37278). Jul 7 00:00:27.242068 systemd-logind[1455]: Removed session 4. Jul 7 00:00:27.267654 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 37278 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:27.269169 sshd[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:27.273052 systemd-logind[1455]: New session 5 of user core. Jul 7 00:00:27.282444 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 00:00:27.341497 sudo[1605]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 00:00:27.341857 sudo[1605]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:27.502843 sudo[1605]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:27.505066 sshd[1602]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:27.523188 systemd[1]: sshd@4-10.0.0.146:22-10.0.0.1:37278.service: Deactivated successfully. Jul 7 00:00:27.525095 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 00:00:27.526858 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Jul 7 00:00:27.543558 systemd[1]: Started sshd@5-10.0.0.146:22-10.0.0.1:37280.service - OpenSSH per-connection server daemon (10.0.0.1:37280). Jul 7 00:00:27.544424 systemd-logind[1455]: Removed session 5. Jul 7 00:00:27.574557 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 37280 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:27.576106 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:27.580647 systemd-logind[1455]: New session 6 of user core. Jul 7 00:00:27.590446 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 7 00:00:27.644452 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 00:00:27.644782 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:27.649137 sudo[1614]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:27.654970 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 00:00:27.655359 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:27.674618 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 00:00:27.676887 auditctl[1617]: No rules Jul 7 00:00:27.678176 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 00:00:27.678480 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 00:00:27.680484 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 00:00:27.714063 augenrules[1635]: No rules Jul 7 00:00:27.715839 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 00:00:27.717327 sudo[1613]: pam_unix(sudo:session): session closed for user root Jul 7 00:00:27.719236 sshd[1610]: pam_unix(sshd:session): session closed for user core Jul 7 00:00:27.727245 systemd[1]: sshd@5-10.0.0.146:22-10.0.0.1:37280.service: Deactivated successfully. Jul 7 00:00:27.728839 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 00:00:27.730677 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Jul 7 00:00:27.732090 systemd[1]: Started sshd@6-10.0.0.146:22-10.0.0.1:37288.service - OpenSSH per-connection server daemon (10.0.0.1:37288). Jul 7 00:00:27.733068 systemd-logind[1455]: Removed session 6. Jul 7 00:00:27.765473 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 37288 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:00:27.766917 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:00:27.771085 systemd-logind[1455]: New session 7 of user core. Jul 7 00:00:27.780456 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 00:00:27.833924 sudo[1646]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 00:00:27.834246 sudo[1646]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 00:00:28.501522 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 00:00:28.501693 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 00:00:29.193025 dockerd[1664]: time="2025-07-07T00:00:29.192936072Z" level=info msg="Starting up" Jul 7 00:00:29.651391 dockerd[1664]: time="2025-07-07T00:00:29.651330021Z" level=info msg="Loading containers: start." Jul 7 00:00:29.810353 kernel: Initializing XFRM netlink socket Jul 7 00:00:29.894145 systemd-networkd[1404]: docker0: Link UP Jul 7 00:00:29.916624 dockerd[1664]: time="2025-07-07T00:00:29.916523428Z" level=info msg="Loading containers: done." Jul 7 00:00:29.941912 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3490078861-merged.mount: Deactivated successfully. 
Jul 7 00:00:29.945080 dockerd[1664]: time="2025-07-07T00:00:29.945022955Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 00:00:29.945202 dockerd[1664]: time="2025-07-07T00:00:29.945170612Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 00:00:29.945351 dockerd[1664]: time="2025-07-07T00:00:29.945331524Z" level=info msg="Daemon has completed initialization" Jul 7 00:00:29.985392 dockerd[1664]: time="2025-07-07T00:00:29.985241875Z" level=info msg="API listen on /run/docker.sock" Jul 7 00:00:29.985690 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 00:00:30.714159 containerd[1465]: time="2025-07-07T00:00:30.714110243Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 7 00:00:31.630268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1890632752.mount: Deactivated successfully. Jul 7 00:00:33.003291 containerd[1465]: time="2025-07-07T00:00:33.003229582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:33.003946 containerd[1465]: time="2025-07-07T00:00:33.003907302Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 7 00:00:33.005083 containerd[1465]: time="2025-07-07T00:00:33.005049464Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:33.008436 containerd[1465]: time="2025-07-07T00:00:33.008409233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:33.009393 containerd[1465]: time="2025-07-07T00:00:33.009348144Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 2.295193397s" Jul 7 00:00:33.009393 containerd[1465]: time="2025-07-07T00:00:33.009384622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 7 00:00:33.010376 containerd[1465]: time="2025-07-07T00:00:33.010353800Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 7 00:00:34.537697 containerd[1465]: time="2025-07-07T00:00:34.537627688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:34.538331 containerd[1465]: time="2025-07-07T00:00:34.538281684Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 7 00:00:34.539447 containerd[1465]: time="2025-07-07T00:00:34.539402966Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:34.542014 containerd[1465]: time="2025-07-07T00:00:34.541983605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:34.543129 containerd[1465]: time="2025-07-07T00:00:34.543093216Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 1.532710242s" Jul 7 00:00:34.543183 containerd[1465]: time="2025-07-07T00:00:34.543128822Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 7 00:00:34.543702 containerd[1465]: time="2025-07-07T00:00:34.543672191Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 7 00:00:34.842480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 00:00:34.852477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:35.147554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:35.152022 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:00:35.310831 kubelet[1880]: E0707 00:00:35.310726 1880 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:00:35.318192 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:00:35.318449 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
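
The kubelet failure above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml does not exist until kubeadm (or equivalent provisioning) writes it, so the process exits 1 and systemd keeps restarting it. A minimal sketch of the same error shape, using only the standard library (not the kubelet's actual config loader):

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// Mirrors the failure mode in the log: exit non-zero when the config file is
// absent, leaving systemd to schedule the next attempt.
func loadKubeletConfig(path string) ([]byte, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("failed to load Kubelet config file %q: %w", path, err)
	}
	return b, nil
}

func main() {
	_, err := loadKubeletConfig("/var/lib/kubelet/config.yaml")
	if errors.Is(err, fs.ErrNotExist) {
		fmt.Println(err) // the unit exits 1; provisioning later writes the file
		os.Exit(1)
	}
}
```
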
Jul 7 00:00:36.155414 containerd[1465]: time="2025-07-07T00:00:36.155341310Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:36.156337 containerd[1465]: time="2025-07-07T00:00:36.156250345Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 7 00:00:36.157577 containerd[1465]: time="2025-07-07T00:00:36.157546465Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:36.160231 containerd[1465]: time="2025-07-07T00:00:36.160197636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:36.161214 containerd[1465]: time="2025-07-07T00:00:36.161173946Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.617470807s" Jul 7 00:00:36.161214 containerd[1465]: time="2025-07-07T00:00:36.161209824Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 7 00:00:36.161837 containerd[1465]: time="2025-07-07T00:00:36.161710723Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 7 00:00:37.444940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516488645.mount: Deactivated successfully. 
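
The pull messages include enough data to estimate effective throughput; for the kube-scheduler image just above, 20155055 bytes read in 1.617470807 s works out to roughly 12.5 MB/s. A one-liner sketch of that calculation (numbers copied from the log):

```go
package main

import "fmt"

func main() {
	// From the kube-scheduler pull above: bytes read and wall-clock duration.
	const bytesRead = 20155055.0
	const seconds = 1.617470807
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // ~12.5 MB/s
}
```
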
Jul 7 00:00:38.463749 containerd[1465]: time="2025-07-07T00:00:38.463662362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:38.464640 containerd[1465]: time="2025-07-07T00:00:38.464593628Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 7 00:00:38.465782 containerd[1465]: time="2025-07-07T00:00:38.465745147Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:38.467809 containerd[1465]: time="2025-07-07T00:00:38.467760135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:38.468723 containerd[1465]: time="2025-07-07T00:00:38.468408901Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 2.306655689s" Jul 7 00:00:38.468723 containerd[1465]: time="2025-07-07T00:00:38.468660142Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 7 00:00:38.469283 containerd[1465]: time="2025-07-07T00:00:38.469249898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 7 00:00:38.976552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930733241.mount: Deactivated successfully. 
Jul 7 00:00:39.714459 containerd[1465]: time="2025-07-07T00:00:39.714397790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:39.715176 containerd[1465]: time="2025-07-07T00:00:39.715115546Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 7 00:00:39.716278 containerd[1465]: time="2025-07-07T00:00:39.716248290Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:39.719600 containerd[1465]: time="2025-07-07T00:00:39.719565580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:39.720851 containerd[1465]: time="2025-07-07T00:00:39.720784746Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.251341044s" Jul 7 00:00:39.720851 containerd[1465]: time="2025-07-07T00:00:39.720835230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 7 00:00:39.721436 containerd[1465]: time="2025-07-07T00:00:39.721417743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 00:00:40.219762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4264109841.mount: Deactivated successfully. 
Jul 7 00:00:40.225007 containerd[1465]: time="2025-07-07T00:00:40.224947661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:40.225671 containerd[1465]: time="2025-07-07T00:00:40.225626053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 00:00:40.226713 containerd[1465]: time="2025-07-07T00:00:40.226670221Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:40.228887 containerd[1465]: time="2025-07-07T00:00:40.228837905Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:40.229588 containerd[1465]: time="2025-07-07T00:00:40.229534311Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 508.089818ms" Jul 7 00:00:40.229588 containerd[1465]: time="2025-07-07T00:00:40.229573875Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 00:00:40.230155 containerd[1465]: time="2025-07-07T00:00:40.230126782Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 7 00:00:41.611183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172457353.mount: Deactivated successfully. Jul 7 00:00:44.047961 containerd[1465]: time="2025-07-07T00:00:44.047865461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:44.048601 containerd[1465]: time="2025-07-07T00:00:44.048540236Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 7 00:00:44.049978 containerd[1465]: time="2025-07-07T00:00:44.049944008Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:44.053392 containerd[1465]: time="2025-07-07T00:00:44.053365233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:00:44.054509 containerd[1465]: time="2025-07-07T00:00:44.054439848Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.824276267s" Jul 7 00:00:44.054509 containerd[1465]: time="2025-07-07T00:00:44.054499370Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 7 00:00:45.342449 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
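
Note the spacing of the kubelet attempts: it failed at 00:00:35.318449 and systemd scheduled the next start at 00:00:45.342449, a gap of about ten seconds. That is consistent with a RestartSec=10 drop-in such as the one kubeadm typically installs, though the unit file itself is not shown in this log. A small sketch computing the gap from the journal timestamps:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two kubelet.service journal lines above.
	const layout = "Jan 2 15:04:05.000000"
	failed, _ := time.Parse(layout, "Jul 7 00:00:35.318449")
	restart, _ := time.Parse(layout, "Jul 7 00:00:45.342449")
	fmt.Println(restart.Sub(failed)) // 10.024s
}
```
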
Jul 7 00:00:45.355458 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:45.516990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:45.521841 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 00:00:45.639544 kubelet[2043]: E0707 00:00:45.639354 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 00:00:45.644337 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 00:00:45.644658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 00:00:46.826013 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:46.847633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:46.872250 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Jul 7 00:00:46.872267 systemd[1]: Reloading... Jul 7 00:00:46.959395 zram_generator::config[2101]: No configuration found. Jul 7 00:00:47.635924 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:47.714636 systemd[1]: Reloading finished in 841 ms. Jul 7 00:00:47.765433 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 00:00:47.765527 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 00:00:47.765819 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:47.768713 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:47.933404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:47.938673 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:00:47.989992 kubelet[2147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:00:47.989992 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:00:47.989992 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 7 00:00:47.990460 kubelet[2147]: I0707 00:00:47.990033 2147 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:00:48.550083 kubelet[2147]: I0707 00:00:48.550026 2147 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:00:48.550083 kubelet[2147]: I0707 00:00:48.550064 2147 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:00:48.550370 kubelet[2147]: I0707 00:00:48.550351 2147 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:00:48.609653 kubelet[2147]: E0707 00:00:48.609591 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:00:48.609875 kubelet[2147]: I0707 00:00:48.609850 2147 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:00:48.617014 kubelet[2147]: E0707 00:00:48.616979 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:00:48.617014 kubelet[2147]: I0707 00:00:48.617014 2147 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:00:48.622763 kubelet[2147]: I0707 00:00:48.622738 2147 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:00:48.623042 kubelet[2147]: I0707 00:00:48.623002 2147 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:00:48.623279 kubelet[2147]: I0707 00:00:48.623033 2147 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:00:48.623431 kubelet[2147]: I0707 00:00:48.623290 2147 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:00:48.623431 kubelet[2147]: I0707 00:00:48.623317 2147 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:00:48.624239 kubelet[2147]: I0707 00:00:48.624206 2147 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:00:48.626357 kubelet[2147]: I0707 00:00:48.626325 2147 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:00:48.626357 kubelet[2147]: I0707 00:00:48.626347 2147 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:00:48.626426 kubelet[2147]: I0707 00:00:48.626391 2147 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:00:48.626426 kubelet[2147]: I0707 00:00:48.626423 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:00:48.634171 kubelet[2147]: E0707 00:00:48.633750 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:00:48.634171 kubelet[2147]: I0707 00:00:48.633880 2147 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:00:48.634445 kubelet[2147]: E0707 00:00:48.634416 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:00:48.634445 kubelet[2147]: I0707 00:00:48.634437 2147 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:00:48.635342 kubelet[2147]: W0707 00:00:48.635323 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 00:00:48.638099 kubelet[2147]: I0707 00:00:48.638073 2147 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:00:48.638166 kubelet[2147]: I0707 00:00:48.638137 2147 server.go:1289] "Started kubelet" Jul 7 00:00:48.638397 kubelet[2147]: I0707 00:00:48.638347 2147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:00:48.640298 kubelet[2147]: I0707 00:00:48.640271 2147 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:00:48.640358 kubelet[2147]: I0707 00:00:48.640267 2147 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:00:48.641590 kubelet[2147]: I0707 00:00:48.641567 2147 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:00:48.643912 kubelet[2147]: I0707 00:00:48.643881 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:00:48.643989 kubelet[2147]: I0707 00:00:48.643931 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:00:48.644243 kubelet[2147]: E0707 00:00:48.643028 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.146:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.146:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcf10381a75ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:00:48.638096814 +0000 UTC m=+0.679851252,LastTimestamp:2025-07-07 00:00:48.638096814 +0000 UTC m=+0.679851252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:00:48.644431 kubelet[2147]: I0707 00:00:48.644414 2147 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:00:48.644481 kubelet[2147]: E0707 00:00:48.644455 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:00:48.645049 kubelet[2147]: E0707 00:00:48.645016 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="200ms" Jul 7 00:00:48.645805 kubelet[2147]: E0707 00:00:48.645415 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": 
dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:00:48.645805 kubelet[2147]: I0707 00:00:48.645454 2147 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:00:48.645805 kubelet[2147]: I0707 00:00:48.645545 2147 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:00:48.646662 kubelet[2147]: I0707 00:00:48.646635 2147 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:00:48.646786 kubelet[2147]: I0707 00:00:48.646763 2147 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:00:48.647467 kubelet[2147]: E0707 00:00:48.647444 2147 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:00:48.648027 kubelet[2147]: I0707 00:00:48.648012 2147 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:00:48.651468 kubelet[2147]: I0707 00:00:48.651427 2147 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:00:48.707422 kubelet[2147]: I0707 00:00:48.707105 2147 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:00:48.707422 kubelet[2147]: I0707 00:00:48.707133 2147 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:00:48.707422 kubelet[2147]: I0707 00:00:48.707157 2147 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:00:48.710801 kubelet[2147]: I0707 00:00:48.710769 2147 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:00:48.710868 kubelet[2147]: I0707 00:00:48.710816 2147 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:00:48.710868 kubelet[2147]: I0707 00:00:48.710845 2147 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
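
[editor's note] The crio factory registration above fails because nothing is listening on /var/run/crio/crio.sock. Below is a minimal sketch (not kubelet/cadvisor code) of how a client probes a runtime's info endpoint over a unix socket; the socket path is taken from the log line, the endpoint URL shape mirrors the logged request, and the HTTP client setup is illustrative.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock" // path from the log line above

	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Route every request over the unix socket instead of TCP.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}

	// The host part is ignored once DialContext pins the unix socket.
	resp, err := client.Get("http://unix/info")
	if err != nil {
		// On this host: "no such file or directory", matching the log.
		fmt.Println("crio not available:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("crio responded:", resp.Status)
}
```
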
Jul 7 00:00:48.710868 kubelet[2147]: I0707 00:00:48.710859 2147 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:00:48.710950 kubelet[2147]: E0707 00:00:48.710906 2147 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:00:48.745105 kubelet[2147]: E0707 00:00:48.745048 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:00:48.811499 kubelet[2147]: E0707 00:00:48.811361 2147 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:00:48.845616 kubelet[2147]: E0707 00:00:48.845575 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:00:48.845919 kubelet[2147]: E0707 00:00:48.845886 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="400ms" Jul 7 00:00:48.946145 kubelet[2147]: E0707 00:00:48.946096 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:00:49.012482 kubelet[2147]: E0707 00:00:49.012415 2147 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 00:00:49.046719 kubelet[2147]: E0707 00:00:49.046677 2147 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 00:00:49.089393 kubelet[2147]: E0707 00:00:49.089246 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:00:49.090155 kubelet[2147]: I0707 00:00:49.090122 2147 policy_none.go:49] "None policy: Start" Jul 7 00:00:49.090197 kubelet[2147]: I0707 00:00:49.090157 2147 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:00:49.090197 kubelet[2147]: I0707 00:00:49.090185 2147 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:00:49.104210 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 00:00:49.125514 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 00:00:49.128845 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
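
[editor's note] Every watch, lease, and event post above fails with "dial tcp 10.0.0.146:6443: connect: connection refused" because the kube-apiserver static pod has not started yet. A minimal sketch of the same connectivity check, with the address copied from the log:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the log; refusal is expected until the
	// apiserver container comes up later in this boot.
	conn, err := net.DialTimeout("tcp", "10.0.0.146:6443", time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```
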
Jul 7 00:00:49.139225 kubelet[2147]: E0707 00:00:49.139183 2147 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:00:49.139495 kubelet[2147]: I0707 00:00:49.139477 2147 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:00:49.139552 kubelet[2147]: I0707 00:00:49.139499 2147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:00:49.139972 kubelet[2147]: I0707 00:00:49.139828 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:00:49.140683 kubelet[2147]: E0707 00:00:49.140649 2147 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 00:00:49.140744 kubelet[2147]: E0707 00:00:49.140689 2147 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 00:00:49.241685 kubelet[2147]: I0707 00:00:49.241640 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:49.242054 kubelet[2147]: E0707 00:00:49.242030 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 7 00:00:49.246617 kubelet[2147]: E0707 00:00:49.246590 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="800ms" Jul 7 00:00:49.443486 kubelet[2147]: I0707 00:00:49.443447 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:49.443929 kubelet[2147]: E0707 00:00:49.443892 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 7 00:00:49.450335 kubelet[2147]: I0707 00:00:49.450236 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/216aeeb361f165ee6a2af035428d359a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"216aeeb361f165ee6a2af035428d359a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:49.450405 kubelet[2147]: I0707 00:00:49.450339 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/216aeeb361f165ee6a2af035428d359a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"216aeeb361f165ee6a2af035428d359a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:49.450429 kubelet[2147]: I0707 00:00:49.450403 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/216aeeb361f165ee6a2af035428d359a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"216aeeb361f165ee6a2af035428d359a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:49.624745 kubelet[2147]: E0707 00:00:49.624690 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.0.146:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 7 00:00:49.672992 kubelet[2147]: E0707 00:00:49.672950 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.146:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 7 00:00:49.847740 kubelet[2147]: I0707 00:00:49.847682 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:49.848033 kubelet[2147]: E0707 00:00:49.847999 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 7 00:00:50.047578 kubelet[2147]: E0707 00:00:50.047521 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.146:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.146:6443: connect: connection refused" interval="1.6s" Jul 7 00:00:50.178368 kubelet[2147]: E0707 00:00:50.178218 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.146:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 7 00:00:50.188930 kubelet[2147]: E0707 00:00:50.188896 2147 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.146:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 7 00:00:50.279531 systemd[1]: Created slice kubepods-burstable-pod216aeeb361f165ee6a2af035428d359a.slice - libcontainer container kubepods-burstable-pod216aeeb361f165ee6a2af035428d359a.slice. Jul 7 00:00:50.286203 kubelet[2147]: E0707 00:00:50.286156 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:50.286566 kubelet[2147]: E0707 00:00:50.286541 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:50.287190 containerd[1465]: time="2025-07-07T00:00:50.287136563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:216aeeb361f165ee6a2af035428d359a,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:50.291497 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 7 00:00:50.299685 kubelet[2147]: E0707 00:00:50.299631 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:50.302450 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 7 00:00:50.304209 kubelet[2147]: E0707 00:00:50.304188 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:50.355549 kubelet[2147]: I0707 00:00:50.355465 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:50.355549 kubelet[2147]: I0707 00:00:50.355501 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:50.355549 kubelet[2147]: I0707 00:00:50.355538 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:50.355549 kubelet[2147]: I0707 00:00:50.355558 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:50.355777 kubelet[2147]: I0707 00:00:50.355582 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:50.355777 kubelet[2147]: I0707 00:00:50.355604 2147 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:50.600699 kubelet[2147]: E0707 00:00:50.600668 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:50.601315 containerd[1465]: time="2025-07-07T00:00:50.601272568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:50.605487 kubelet[2147]: 
E0707 00:00:50.605466 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:50.605866 containerd[1465]: time="2025-07-07T00:00:50.605838579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 7 00:00:50.649251 kubelet[2147]: I0707 00:00:50.649213 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:50.649519 kubelet[2147]: E0707 00:00:50.649485 2147 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.146:6443/api/v1/nodes\": dial tcp 10.0.0.146:6443: connect: connection refused" node="localhost" Jul 7 00:00:50.780244 kubelet[2147]: E0707 00:00:50.780184 2147 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.146:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.146:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 7 00:00:50.826494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507166679.mount: Deactivated successfully. Jul 7 00:00:50.834850 containerd[1465]: time="2025-07-07T00:00:50.834805241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:00:50.835769 containerd[1465]: time="2025-07-07T00:00:50.835733802Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:00:50.836588 containerd[1465]: time="2025-07-07T00:00:50.836566093Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:00:50.837494 containerd[1465]: time="2025-07-07T00:00:50.837448127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:00:50.838301 containerd[1465]: time="2025-07-07T00:00:50.838272943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 7 00:00:50.839222 containerd[1465]: time="2025-07-07T00:00:50.839182128Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 00:00:50.840018 containerd[1465]: time="2025-07-07T00:00:50.839990884Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:00:50.842618 containerd[1465]: time="2025-07-07T00:00:50.842588074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 00:00:50.844593 containerd[1465]: time="2025-07-07T00:00:50.844561815Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 238.66628ms" Jul 7 00:00:50.846413 containerd[1465]: time="2025-07-07T00:00:50.846385484Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.141059ms" Jul 7 00:00:50.847036 containerd[1465]: time="2025-07-07T00:00:50.847001299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 245.629004ms" Jul 7 00:00:51.180507 containerd[1465]: time="2025-07-07T00:00:51.180384522Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:51.180507 containerd[1465]: time="2025-07-07T00:00:51.180447621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:51.180507 containerd[1465]: time="2025-07-07T00:00:51.180461657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.180754 containerd[1465]: time="2025-07-07T00:00:51.180567035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.181544 containerd[1465]: time="2025-07-07T00:00:51.181257479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:51.181544 containerd[1465]: time="2025-07-07T00:00:51.181343541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:51.181544 containerd[1465]: time="2025-07-07T00:00:51.181363107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.182528 containerd[1465]: time="2025-07-07T00:00:51.182430228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.183016 containerd[1465]: time="2025-07-07T00:00:51.181498982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:00:51.183095 containerd[1465]: time="2025-07-07T00:00:51.182996350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:00:51.183095 containerd[1465]: time="2025-07-07T00:00:51.183014924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.183196 containerd[1465]: time="2025-07-07T00:00:51.183111666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:00:51.237577 systemd[1]: Started cri-containerd-d6987cfef8fe6c6f21bab6f0055a0b39119f58c821280c58133c6b84b677a68c.scope - libcontainer container d6987cfef8fe6c6f21bab6f0055a0b39119f58c821280c58133c6b84b677a68c. Jul 7 00:00:51.244593 systemd[1]: Started cri-containerd-f601b15171b7d33010c0ae04a8d4ef238ba22e753e37c07eb606cacb54abd659.scope - libcontainer container f601b15171b7d33010c0ae04a8d4ef238ba22e753e37c07eb606cacb54abd659. Jul 7 00:00:51.251937 systemd[1]: Started cri-containerd-015922404f340d1f1d475584d2ebb88ce37602eab828a24ed72dcc17fff3f87e.scope - libcontainer container 015922404f340d1f1d475584d2ebb88ce37602eab828a24ed72dcc17fff3f87e. Jul 7 00:00:51.292337 containerd[1465]: time="2025-07-07T00:00:51.292273865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6987cfef8fe6c6f21bab6f0055a0b39119f58c821280c58133c6b84b677a68c\"" Jul 7 00:00:51.293629 kubelet[2147]: E0707 00:00:51.293536 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:51.299273 containerd[1465]: time="2025-07-07T00:00:51.299072462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:216aeeb361f165ee6a2af035428d359a,Namespace:kube-system,Attempt:0,} returns sandbox id \"f601b15171b7d33010c0ae04a8d4ef238ba22e753e37c07eb606cacb54abd659\"" Jul 7 00:00:51.299273 containerd[1465]: time="2025-07-07T00:00:51.299152382Z" level=info msg="CreateContainer within sandbox \"d6987cfef8fe6c6f21bab6f0055a0b39119f58c821280c58133c6b84b677a68c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 00:00:51.300841 kubelet[2147]: E0707 00:00:51.300813 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:51.305775 containerd[1465]: time="2025-07-07T00:00:51.305740304Z" level=info msg="CreateContainer within sandbox \"f601b15171b7d33010c0ae04a8d4ef238ba22e753e37c07eb606cacb54abd659\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 00:00:51.308122 containerd[1465]: time="2025-07-07T00:00:51.307952382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"015922404f340d1f1d475584d2ebb88ce37602eab828a24ed72dcc17fff3f87e\"" Jul 7 00:00:51.308735 kubelet[2147]: E0707 00:00:51.308711 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:51.314150 containerd[1465]: time="2025-07-07T00:00:51.314091984Z" level=info msg="CreateContainer within sandbox \"015922404f340d1f1d475584d2ebb88ce37602eab828a24ed72dcc17fff3f87e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 00:00:51.318351 containerd[1465]: time="2025-07-07T00:00:51.318315743Z" level=info msg="CreateContainer within sandbox \"d6987cfef8fe6c6f21bab6f0055a0b39119f58c821280c58133c6b84b677a68c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f5a341cf79cce526df0b4bcc10ff9dac669ccc0398cf418fad56b9983f9ce288\"" Jul 7 00:00:51.319750 
containerd[1465]: time="2025-07-07T00:00:51.319663009Z" level=info msg="StartContainer for \"f5a341cf79cce526df0b4bcc10ff9dac669ccc0398cf418fad56b9983f9ce288\"" Jul 7 00:00:51.329118 containerd[1465]: time="2025-07-07T00:00:51.329078053Z" level=info msg="CreateContainer within sandbox \"f601b15171b7d33010c0ae04a8d4ef238ba22e753e37c07eb606cacb54abd659\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"905d0e8a13dbd56f28ad9817c56e6989187205be276072d07db67c83fbaee63d\"" Jul 7 00:00:51.329557 containerd[1465]: time="2025-07-07T00:00:51.329526013Z" level=info msg="StartContainer for \"905d0e8a13dbd56f28ad9817c56e6989187205be276072d07db67c83fbaee63d\"" Jul 7 00:00:51.342815 containerd[1465]: time="2025-07-07T00:00:51.342696377Z" level=info msg="CreateContainer within sandbox \"015922404f340d1f1d475584d2ebb88ce37602eab828a24ed72dcc17fff3f87e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d1f43b8191b698d619b1ead9b21b722938b8aca957abc7bb76706068627807ae\"" Jul 7 00:00:51.344294 containerd[1465]: time="2025-07-07T00:00:51.343244805Z" level=info msg="StartContainer for \"d1f43b8191b698d619b1ead9b21b722938b8aca957abc7bb76706068627807ae\"" Jul 7 00:00:51.347442 systemd[1]: Started cri-containerd-f5a341cf79cce526df0b4bcc10ff9dac669ccc0398cf418fad56b9983f9ce288.scope - libcontainer container f5a341cf79cce526df0b4bcc10ff9dac669ccc0398cf418fad56b9983f9ce288. Jul 7 00:00:51.353295 systemd[1]: Started cri-containerd-905d0e8a13dbd56f28ad9817c56e6989187205be276072d07db67c83fbaee63d.scope - libcontainer container 905d0e8a13dbd56f28ad9817c56e6989187205be276072d07db67c83fbaee63d. Jul 7 00:00:51.376610 systemd[1]: Started cri-containerd-d1f43b8191b698d619b1ead9b21b722938b8aca957abc7bb76706068627807ae.scope - libcontainer container d1f43b8191b698d619b1ead9b21b722938b8aca957abc7bb76706068627807ae. 
Jul 7 00:00:51.399455 containerd[1465]: time="2025-07-07T00:00:51.398926178Z" level=info msg="StartContainer for \"f5a341cf79cce526df0b4bcc10ff9dac669ccc0398cf418fad56b9983f9ce288\" returns successfully" Jul 7 00:00:51.406145 containerd[1465]: time="2025-07-07T00:00:51.406112091Z" level=info msg="StartContainer for \"905d0e8a13dbd56f28ad9817c56e6989187205be276072d07db67c83fbaee63d\" returns successfully" Jul 7 00:00:51.421599 containerd[1465]: time="2025-07-07T00:00:51.421532264Z" level=info msg="StartContainer for \"d1f43b8191b698d619b1ead9b21b722938b8aca957abc7bb76706068627807ae\" returns successfully" Jul 7 00:00:51.719742 kubelet[2147]: E0707 00:00:51.719701 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:51.719871 kubelet[2147]: E0707 00:00:51.719827 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:51.721653 kubelet[2147]: E0707 00:00:51.721630 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:51.724128 kubelet[2147]: E0707 00:00:51.721728 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:51.724128 kubelet[2147]: E0707 00:00:51.723944 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:51.724128 kubelet[2147]: E0707 00:00:51.724046 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:52.253086 kubelet[2147]: I0707 00:00:52.252884 2147 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:52.727216 kubelet[2147]: E0707 00:00:52.726981 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:52.727216 kubelet[2147]: E0707 00:00:52.727112 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:52.730381 kubelet[2147]: E0707 00:00:52.730335 2147 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 00:00:52.730574 kubelet[2147]: E0707 00:00:52.730547 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:52.887563 kubelet[2147]: E0707 00:00:52.887477 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 00:00:53.332838 kubelet[2147]: E0707 00:00:53.332432 2147 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184fcf10381a75ae default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 00:00:48.638096814 +0000 UTC m=+0.679851252,LastTimestamp:2025-07-07 00:00:48.638096814 +0000 UTC m=+0.679851252,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 00:00:53.335342 kubelet[2147]: I0707 00:00:53.333374 2147 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:00:53.347548 kubelet[2147]: I0707 00:00:53.347498 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:53.356965 kubelet[2147]: E0707 00:00:53.356926 2147 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:53.356965 kubelet[2147]: I0707 00:00:53.356960 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:53.359334 kubelet[2147]: E0707 00:00:53.359258 2147 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:53.359334 kubelet[2147]: I0707 00:00:53.359295 2147 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:53.360558 kubelet[2147]: E0707 00:00:53.360537 2147 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:53.632499 kubelet[2147]: I0707 00:00:53.632450 2147 apiserver.go:52] "Watching apiserver" Jul 7 00:00:53.645971 kubelet[2147]: I0707 00:00:53.645929 2147 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:00:55.382516 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-7.scope)... Jul 7 00:00:55.382531 systemd[1]: Reloading... Jul 7 00:00:55.466342 zram_generator::config[2473]: No configuration found. Jul 7 00:00:55.575454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 00:00:55.667676 systemd[1]: Reloading finished in 284 ms. Jul 7 00:00:55.713827 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:55.737867 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 00:00:55.738146 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 00:00:55.738195 systemd[1]: kubelet.service: Consumed 1.375s CPU time, 131.8M memory peak, 0B memory swap peak. Jul 7 00:00:55.750883 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 00:00:55.915661 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
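
[editor's note] The recurring dns.go "Nameserver limits exceeded" errors above come from the Linux resolver honoring only the first three nameserver lines; the kubelet truncates the host's list and warns, and the applied set in the log (1.1.1.1 1.0.0.1 8.8.8.8) implies at least a fourth entry was dropped. A minimal sketch of that check, assuming the conventional /etc/resolv.conf path and the glibc limit of 3; this is not the kubelet's code.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS: resolvers ignore entries past the third

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the warning in the log: excess entries are omitted.
		fmt.Printf("nameserver limits exceeded, applying only: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Println("nameservers:", strings.Join(servers, " "))
}
```
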
Jul 7 00:00:55.927272 (kubelet)[2515]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 00:00:55.995284 kubelet[2515]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:00:55.995284 kubelet[2515]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 00:00:55.995284 kubelet[2515]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 00:00:55.995676 kubelet[2515]: I0707 00:00:55.995338 2515 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 00:00:56.001333 kubelet[2515]: I0707 00:00:56.001280 2515 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 7 00:00:56.001333 kubelet[2515]: I0707 00:00:56.001321 2515 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 00:00:56.001576 kubelet[2515]: I0707 00:00:56.001555 2515 server.go:956] "Client rotation is on, will bootstrap in background" Jul 7 00:00:56.002756 kubelet[2515]: I0707 00:00:56.002733 2515 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 7 00:00:56.004855 kubelet[2515]: I0707 00:00:56.004812 2515 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 00:00:56.009144 kubelet[2515]: E0707 00:00:56.009113 2515 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 00:00:56.009144 kubelet[2515]: I0707 00:00:56.009143 2515 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 00:00:56.014004 kubelet[2515]: I0707 00:00:56.013973 2515 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 00:00:56.014315 kubelet[2515]: I0707 00:00:56.014273 2515 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 00:00:56.014468 kubelet[2515]: I0707 00:00:56.014319 2515 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 00:00:56.014545 kubelet[2515]: I0707 00:00:56.014475 2515 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 00:00:56.014545 kubelet[2515]: I0707 00:00:56.014486 2515 container_manager_linux.go:303] "Creating device plugin manager" Jul 7 00:00:56.014545 kubelet[2515]: I0707 00:00:56.014534 2515 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:00:56.014706 kubelet[2515]: I0707 00:00:56.014690 2515 kubelet.go:480] "Attempting to sync node with API server" Jul 7 00:00:56.014706 kubelet[2515]: I0707 00:00:56.014706 2515 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 00:00:56.014757 kubelet[2515]: I0707 00:00:56.014739 2515 kubelet.go:386] "Adding apiserver pod source" Jul 7 00:00:56.014792 kubelet[2515]: I0707 00:00:56.014779 2515 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 00:00:56.015918 kubelet[2515]: I0707 00:00:56.015898 2515 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 00:00:56.016431 kubelet[2515]: I0707 00:00:56.016396 2515 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 7 00:00:56.023159 kubelet[2515]: I0707 00:00:56.023119 2515 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 00:00:56.023220 kubelet[2515]: I0707 00:00:56.023194 2515 server.go:1289] "Started kubelet" Jul 7 00:00:56.023650 kubelet[2515]: I0707 00:00:56.023537 2515 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 00:00:56.023962 kubelet[2515]: I0707 00:00:56.023875 2515 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 00:00:56.025853 kubelet[2515]: I0707 00:00:56.024536 2515 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 00:00:56.026631 kubelet[2515]: I0707 00:00:56.026592 2515 server.go:317] "Adding debug handlers to kubelet server" Jul 7 00:00:56.033249 kubelet[2515]: I0707 00:00:56.030945 2515 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 00:00:56.033249 kubelet[2515]: I0707 00:00:56.032092 2515 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 00:00:56.033613 kubelet[2515]: E0707 00:00:56.033582 2515 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 00:00:56.034124 kubelet[2515]: I0707 00:00:56.034077 2515 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 00:00:56.034588 kubelet[2515]: I0707 00:00:56.034559 2515 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 00:00:56.034864 kubelet[2515]: I0707 00:00:56.034729 2515 reconciler.go:26] "Reconciler: start to sync state" Jul 7 00:00:56.037335 kubelet[2515]: I0707 00:00:56.037284 2515 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 00:00:56.039073 kubelet[2515]: I0707 00:00:56.039045 2515 factory.go:223] Registration of the containerd container factory successfully Jul 7 00:00:56.039177 kubelet[2515]: I0707 00:00:56.039163 2515 factory.go:223] Registration of the systemd container factory successfully Jul 7 00:00:56.048961 kubelet[2515]: I0707 00:00:56.048883 2515 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 7 00:00:56.050459 kubelet[2515]: I0707 00:00:56.050427 2515 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 7 00:00:56.050459 kubelet[2515]: I0707 00:00:56.050456 2515 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 7 00:00:56.050538 kubelet[2515]: I0707 00:00:56.050486 2515 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
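
[editor's note] The NodeConfig dumped above embeds the kubelet's HardEvictionThresholds. A minimal sketch restating those five thresholds and evaluating one of them; the signals and cutoffs are copied verbatim from the logged config, while the sample disk figures at the end are made up for the demonstration.

```go
package main

import "fmt"

type threshold struct {
	signal     string
	percentage float64 // fraction of capacity; 0 when an absolute quantity is used
	quantity   string  // absolute cutoff, e.g. "100Mi"; "" when a percentage is used
}

func main() {
	// Values from the HardEvictionThresholds in the NodeConfig above.
	hard := []threshold{
		{"memory.available", 0, "100Mi"},
		{"nodefs.available", 0.10, ""},
		{"nodefs.inodesFree", 0.05, ""},
		{"imagefs.available", 0.15, ""},
		{"imagefs.inodesFree", 0.05, ""},
	}
	for _, t := range hard {
		if t.quantity != "" {
			fmt.Printf("evict when %s < %s\n", t.signal, t.quantity)
		} else {
			fmt.Printf("evict when %s < %.0f%% of capacity\n", t.signal, t.percentage*100)
		}
	}

	// Example evaluation: nodefs.available on a hypothetical 50 GiB disk.
	capacityGiB, availableGiB := 50.0, 4.0 // made-up numbers
	if availableGiB/capacityGiB < 0.10 {
		fmt.Println("nodefs.available below 10%: the eviction manager would act")
	}
}
```
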
Jul 7 00:00:56.050538 kubelet[2515]: I0707 00:00:56.050503 2515 kubelet.go:2436] "Starting kubelet main sync loop" Jul 7 00:00:56.050581 kubelet[2515]: E0707 00:00:56.050556 2515 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 00:00:56.079733 kubelet[2515]: I0707 00:00:56.079696 2515 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 00:00:56.079982 kubelet[2515]: I0707 00:00:56.079948 2515 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 00:00:56.080058 kubelet[2515]: I0707 00:00:56.080047 2515 state_mem.go:36] "Initialized new in-memory state store" Jul 7 00:00:56.080276 kubelet[2515]: I0707 00:00:56.080258 2515 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 00:00:56.080381 kubelet[2515]: I0707 00:00:56.080356 2515 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 00:00:56.080440 kubelet[2515]: I0707 00:00:56.080430 2515 policy_none.go:49] "None policy: Start" Jul 7 00:00:56.080497 kubelet[2515]: I0707 00:00:56.080486 2515 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 00:00:56.080759 kubelet[2515]: I0707 00:00:56.080715 2515 state_mem.go:35] "Initializing new in-memory state store" Jul 7 00:00:56.080949 kubelet[2515]: I0707 00:00:56.080934 2515 state_mem.go:75] "Updated machine memory state" Jul 7 00:00:56.085875 kubelet[2515]: E0707 00:00:56.085855 2515 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 7 00:00:56.086164 kubelet[2515]: I0707 00:00:56.086150 2515 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 00:00:56.086343 kubelet[2515]: I0707 00:00:56.086287 2515 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 00:00:56.086672 kubelet[2515]: I0707 00:00:56.086659 2515 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 00:00:56.087574 kubelet[2515]: E0707 00:00:56.087559 2515 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 00:00:56.152025 kubelet[2515]: I0707 00:00:56.151972 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:56.152190 kubelet[2515]: I0707 00:00:56.152086 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:56.152190 kubelet[2515]: I0707 00:00:56.152097 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:56.194333 kubelet[2515]: I0707 00:00:56.194195 2515 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 00:00:56.200581 kubelet[2515]: I0707 00:00:56.200535 2515 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 00:00:56.200713 kubelet[2515]: I0707 00:00:56.200620 2515 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 00:00:56.336456 kubelet[2515]: I0707 00:00:56.336404 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:56.336604 kubelet[2515]: I0707 00:00:56.336468 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:56.336604 kubelet[2515]: I0707 00:00:56.336498 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 7 00:00:56.336604 kubelet[2515]: I0707 00:00:56.336521 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:56.336604 kubelet[2515]: I0707 00:00:56.336536 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:56.336694 kubelet[2515]: I0707 00:00:56.336551 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/216aeeb361f165ee6a2af035428d359a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"216aeeb361f165ee6a2af035428d359a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:56.336694 kubelet[2515]: I0707 00:00:56.336635 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/216aeeb361f165ee6a2af035428d359a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"216aeeb361f165ee6a2af035428d359a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:56.336694 kubelet[2515]: I0707 00:00:56.336654 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/216aeeb361f165ee6a2af035428d359a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"216aeeb361f165ee6a2af035428d359a\") " pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:56.336795 kubelet[2515]: I0707 00:00:56.336700 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 00:00:56.459589 kubelet[2515]: E0707 00:00:56.458984 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:56.459589 kubelet[2515]: E0707 00:00:56.459054 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:56.459589 kubelet[2515]: E0707 00:00:56.459114 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:57.016201 kubelet[2515]: I0707 00:00:57.016157 2515 apiserver.go:52] "Watching apiserver" Jul 7 00:00:57.035363 kubelet[2515]: I0707 00:00:57.035297 2515 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 00:00:57.064389 kubelet[2515]: E0707 00:00:57.064120 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:57.064389 kubelet[2515]: I0707 00:00:57.064193 2515 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:57.064389 kubelet[2515]: E0707 00:00:57.064197 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:57.123819 kubelet[2515]: E0707 00:00:57.122814 2515 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 00:00:57.123819 kubelet[2515]: E0707 00:00:57.123000 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:57.133651 kubelet[2515]: I0707 00:00:57.133520 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.133489253 podStartE2EDuration="1.133489253s" podCreationTimestamp="2025-07-07 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:57.124807103 +0000 UTC m=+1.190408567" 
watchObservedRunningTime="2025-07-07 00:00:57.133489253 +0000 UTC m=+1.199090717" Jul 7 00:00:57.144369 kubelet[2515]: I0707 00:00:57.143972 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1439198529999999 podStartE2EDuration="1.143919853s" podCreationTimestamp="2025-07-07 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:57.134232107 +0000 UTC m=+1.199833571" watchObservedRunningTime="2025-07-07 00:00:57.143919853 +0000 UTC m=+1.209521317" Jul 7 00:00:57.144369 kubelet[2515]: I0707 00:00:57.144097 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.144092946 podStartE2EDuration="1.144092946s" podCreationTimestamp="2025-07-07 00:00:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:00:57.143784314 +0000 UTC m=+1.209385778" watchObservedRunningTime="2025-07-07 00:00:57.144092946 +0000 UTC m=+1.209694410" Jul 7 00:00:58.065182 kubelet[2515]: E0707 00:00:58.065145 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:58.065182 kubelet[2515]: E0707 00:00:58.065185 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:00:59.990178 kubelet[2515]: I0707 00:00:59.990146 2515 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 00:00:59.990592 containerd[1465]: time="2025-07-07T00:00:59.990507223Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 00:00:59.990826 kubelet[2515]: I0707 00:00:59.990666 2515 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 00:01:01.107405 systemd[1]: Created slice kubepods-besteffort-podb8836ffe_271d_4bfc_af10_2a65b99ff85b.slice - libcontainer container kubepods-besteffort-podb8836ffe_271d_4bfc_af10_2a65b99ff85b.slice. 
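
[editor's note] The pod_startup_latency_tracker entries above are simple timestamp arithmetic: for kube-controller-manager-localhost, 00:00:57.133489253 (watch-observed running time) minus 00:00:56 (pod creation) gives the logged 1.133489253s. A minimal sketch reproducing that subtraction with the timestamps copied from the log; the field the kubelet actually subtracts is inferred from the values, not confirmed here.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching the timestamp format printed in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, _ := time.Parse(layout, "2025-07-07 00:00:56 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-07 00:00:57.133489253 +0000 UTC")

	// Prints 1.133489253s, matching podStartSLOduration in the log.
	fmt.Println("podStartSLOduration:", running.Sub(created))
}
```
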
Jul 7 00:01:01.169569 kubelet[2515]: I0707 00:01:01.169509 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8836ffe-271d-4bfc-af10-2a65b99ff85b-kube-proxy\") pod \"kube-proxy-98p58\" (UID: \"b8836ffe-271d-4bfc-af10-2a65b99ff85b\") " pod="kube-system/kube-proxy-98p58" Jul 7 00:01:01.169569 kubelet[2515]: I0707 00:01:01.169555 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h547z\" (UniqueName: \"kubernetes.io/projected/b8836ffe-271d-4bfc-af10-2a65b99ff85b-kube-api-access-h547z\") pod \"kube-proxy-98p58\" (UID: \"b8836ffe-271d-4bfc-af10-2a65b99ff85b\") " pod="kube-system/kube-proxy-98p58" Jul 7 00:01:01.169569 kubelet[2515]: I0707 00:01:01.169576 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8836ffe-271d-4bfc-af10-2a65b99ff85b-xtables-lock\") pod \"kube-proxy-98p58\" (UID: \"b8836ffe-271d-4bfc-af10-2a65b99ff85b\") " pod="kube-system/kube-proxy-98p58" Jul 7 00:01:01.170123 kubelet[2515]: I0707 00:01:01.169590 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8836ffe-271d-4bfc-af10-2a65b99ff85b-lib-modules\") pod \"kube-proxy-98p58\" (UID: \"b8836ffe-271d-4bfc-af10-2a65b99ff85b\") " pod="kube-system/kube-proxy-98p58" Jul 7 00:01:01.273021 systemd[1]: Created slice kubepods-besteffort-pod79848297_23a9_489a_895a_1fb4e2164a51.slice - libcontainer container kubepods-besteffort-pod79848297_23a9_489a_895a_1fb4e2164a51.slice. Jul 7 00:01:01.371199 kubelet[2515]: I0707 00:01:01.371039 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7r6j\" (UniqueName: \"kubernetes.io/projected/79848297-23a9-489a-895a-1fb4e2164a51-kube-api-access-g7r6j\") pod \"tigera-operator-747864d56d-s8kdq\" (UID: \"79848297-23a9-489a-895a-1fb4e2164a51\") " pod="tigera-operator/tigera-operator-747864d56d-s8kdq" Jul 7 00:01:01.371199 kubelet[2515]: I0707 00:01:01.371113 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79848297-23a9-489a-895a-1fb4e2164a51-var-lib-calico\") pod \"tigera-operator-747864d56d-s8kdq\" (UID: \"79848297-23a9-489a-895a-1fb4e2164a51\") " pod="tigera-operator/tigera-operator-747864d56d-s8kdq" Jul 7 00:01:01.417529 kubelet[2515]: E0707 00:01:01.417466 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:01.418202 containerd[1465]: time="2025-07-07T00:01:01.418159860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98p58,Uid:b8836ffe-271d-4bfc-af10-2a65b99ff85b,Namespace:kube-system,Attempt:0,}" Jul 7 00:01:01.444181 containerd[1465]: time="2025-07-07T00:01:01.443345602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:01.444181 containerd[1465]: time="2025-07-07T00:01:01.444122864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:01.444181 containerd[1465]: time="2025-07-07T00:01:01.444141770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:01.444366 containerd[1465]: time="2025-07-07T00:01:01.444257852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:01.468714 kubelet[2515]: E0707 00:01:01.468678 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:01.470491 systemd[1]: Started cri-containerd-d7b69a1d68a5e7e3a1bdafd9bb91fb7a0df6dedac004b4a1ca4472d7cb671baf.scope - libcontainer container d7b69a1d68a5e7e3a1bdafd9bb91fb7a0df6dedac004b4a1ca4472d7cb671baf. Jul 7 00:01:01.499898 containerd[1465]: time="2025-07-07T00:01:01.499856384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98p58,Uid:b8836ffe-271d-4bfc-af10-2a65b99ff85b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7b69a1d68a5e7e3a1bdafd9bb91fb7a0df6dedac004b4a1ca4472d7cb671baf\"" Jul 7 00:01:01.500833 kubelet[2515]: E0707 00:01:01.500812 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:01.507928 containerd[1465]: time="2025-07-07T00:01:01.507506173Z" level=info msg="CreateContainer within sandbox \"d7b69a1d68a5e7e3a1bdafd9bb91fb7a0df6dedac004b4a1ca4472d7cb671baf\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 00:01:01.527564 containerd[1465]: time="2025-07-07T00:01:01.527509552Z" level=info msg="CreateContainer within sandbox \"d7b69a1d68a5e7e3a1bdafd9bb91fb7a0df6dedac004b4a1ca4472d7cb671baf\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b70bc5a135cca19b0df76a25e880a7082f87c54344f95ccde23f2dc3576df55f\"" Jul 7 00:01:01.528289 containerd[1465]: time="2025-07-07T00:01:01.528155614Z" level=info msg="StartContainer for \"b70bc5a135cca19b0df76a25e880a7082f87c54344f95ccde23f2dc3576df55f\"" Jul 7 00:01:01.557470 systemd[1]: Started cri-containerd-b70bc5a135cca19b0df76a25e880a7082f87c54344f95ccde23f2dc3576df55f.scope - libcontainer container b70bc5a135cca19b0df76a25e880a7082f87c54344f95ccde23f2dc3576df55f. Jul 7 00:01:01.586788 containerd[1465]: time="2025-07-07T00:01:01.586436262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-s8kdq,Uid:79848297-23a9-489a-895a-1fb4e2164a51,Namespace:tigera-operator,Attempt:0,}" Jul 7 00:01:01.586788 containerd[1465]: time="2025-07-07T00:01:01.586454216Z" level=info msg="StartContainer for \"b70bc5a135cca19b0df76a25e880a7082f87c54344f95ccde23f2dc3576df55f\" returns successfully" Jul 7 00:01:01.611868 containerd[1465]: time="2025-07-07T00:01:01.611756590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:01.611868 containerd[1465]: time="2025-07-07T00:01:01.611823387Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:01.611868 containerd[1465]: time="2025-07-07T00:01:01.611838246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:01.613057 containerd[1465]: time="2025-07-07T00:01:01.612922384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:01.632521 systemd[1]: Started cri-containerd-a750562ecc794c182a54ab41e626b97efab790576f128e9601758dba3904209b.scope - libcontainer container a750562ecc794c182a54ab41e626b97efab790576f128e9601758dba3904209b. Jul 7 00:01:01.668007 containerd[1465]: time="2025-07-07T00:01:01.667952201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-s8kdq,Uid:79848297-23a9-489a-895a-1fb4e2164a51,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a750562ecc794c182a54ab41e626b97efab790576f128e9601758dba3904209b\"" Jul 7 00:01:01.669783 containerd[1465]: time="2025-07-07T00:01:01.669643717Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 00:01:02.073450 kubelet[2515]: E0707 00:01:02.073301 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:02.073708 kubelet[2515]: E0707 00:01:02.073653 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:02.469802 kubelet[2515]: I0707 00:01:02.469746 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98p58" podStartSLOduration=1.469726263 podStartE2EDuration="1.469726263s" podCreationTimestamp="2025-07-07 00:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:02.469545568 +0000 UTC m=+6.535147032" watchObservedRunningTime="2025-07-07 00:01:02.469726263 +0000 UTC m=+6.535327727" Jul 7 00:01:03.510335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392715594.mount: Deactivated successfully. 
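
The PullImage entry above starts the fetch of quay.io/tigera/operator:v1.38.3. A rough sketch of the equivalent pull done directly with containerd's Go client rather than through kubelet's CRI path; the socket path and the k8s.io namespace are the conventional defaults and are assumed here:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name(), img.Target().Digest)
}
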
Jul 7 00:01:04.082467 kubelet[2515]: E0707 00:01:04.082380 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:04.412369 containerd[1465]: time="2025-07-07T00:01:04.412321118Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:04.413162 containerd[1465]: time="2025-07-07T00:01:04.413098336Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 00:01:04.414373 containerd[1465]: time="2025-07-07T00:01:04.414345989Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:04.416714 containerd[1465]: time="2025-07-07T00:01:04.416677882Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:04.417299 containerd[1465]: time="2025-07-07T00:01:04.417271190Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.747591595s" Jul 7 00:01:04.417355 containerd[1465]: time="2025-07-07T00:01:04.417298332Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 00:01:04.422994 containerd[1465]: time="2025-07-07T00:01:04.422957894Z" level=info msg="CreateContainer within sandbox \"a750562ecc794c182a54ab41e626b97efab790576f128e9601758dba3904209b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 00:01:04.435254 containerd[1465]: time="2025-07-07T00:01:04.435192126Z" level=info msg="CreateContainer within sandbox \"a750562ecc794c182a54ab41e626b97efab790576f128e9601758dba3904209b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"def82224512e9ca6d73b98b9d84927bdc341d6eabee9ff2ccf09f26554cce4cb\"" Jul 7 00:01:04.435800 containerd[1465]: time="2025-07-07T00:01:04.435772299Z" level=info msg="StartContainer for \"def82224512e9ca6d73b98b9d84927bdc341d6eabee9ff2ccf09f26554cce4cb\"" Jul 7 00:01:04.465433 systemd[1]: Started cri-containerd-def82224512e9ca6d73b98b9d84927bdc341d6eabee9ff2ccf09f26554cce4cb.scope - libcontainer container def82224512e9ca6d73b98b9d84927bdc341d6eabee9ff2ccf09f26554cce4cb. 
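
The ImageCreate entries above are containerd image-store events recorded as the operator tag, image id, and digest reference are committed. One way such events can be observed is the Go client's subscription API; a sketch under the same socket and namespace assumptions as the previous snippet (the filter string follows containerd's event-filter grammar):

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock") // assumed socket
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Watch only image-creation events, the topic behind the lines above.
	ch, errs := client.Subscribe(ctx, `topic=="/images/create"`)
	for {
		select {
		case env := <-ch:
			fmt.Println(env.Timestamp, env.Topic)
		case err := <-errs:
			log.Fatal(err)
		}
	}
}
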
Jul 7 00:01:04.489585 containerd[1465]: time="2025-07-07T00:01:04.489484180Z" level=info msg="StartContainer for \"def82224512e9ca6d73b98b9d84927bdc341d6eabee9ff2ccf09f26554cce4cb\" returns successfully" Jul 7 00:01:05.079283 kubelet[2515]: E0707 00:01:05.079009 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:05.096348 kubelet[2515]: I0707 00:01:05.096265 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-s8kdq" podStartSLOduration=1.347351044 podStartE2EDuration="4.096245518s" podCreationTimestamp="2025-07-07 00:01:01 +0000 UTC" firstStartedPulling="2025-07-07 00:01:01.669249235 +0000 UTC m=+5.734850689" lastFinishedPulling="2025-07-07 00:01:04.418143699 +0000 UTC m=+8.483745163" observedRunningTime="2025-07-07 00:01:05.096188199 +0000 UTC m=+9.161789663" watchObservedRunningTime="2025-07-07 00:01:05.096245518 +0000 UTC m=+9.161846982" Jul 7 00:01:05.927663 kubelet[2515]: E0707 00:01:05.927009 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:06.082249 kubelet[2515]: E0707 00:01:06.082201 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:06.377620 update_engine[1457]: I20250707 00:01:06.377365 1457 update_attempter.cc:509] Updating boot flags... Jul 7 00:01:06.626163 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2878) Jul 7 00:01:07.083426 kubelet[2515]: E0707 00:01:07.083382 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:09.867836 sudo[1646]: pam_unix(sudo:session): session closed for user root Jul 7 00:01:09.870456 sshd[1643]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:09.874535 systemd[1]: sshd@6-10.0.0.146:22-10.0.0.1:37288.service: Deactivated successfully. Jul 7 00:01:09.877764 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 00:01:09.878085 systemd[1]: session-7.scope: Consumed 5.464s CPU time, 160.2M memory peak, 0B memory swap peak. Jul 7 00:01:09.880414 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Jul 7 00:01:09.884786 systemd-logind[1455]: Removed session 7. Jul 7 00:01:12.519511 systemd[1]: Created slice kubepods-besteffort-podc279f017_4d72_42eb_9821_7a32226eeccb.slice - libcontainer container kubepods-besteffort-podc279f017_4d72_42eb_9821_7a32226eeccb.slice. 
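
In the tigera-operator startup entry above, podStartSLOduration differs from podStartE2EDuration by exactly the image-pull window (firstStartedPulling to lastFinishedPulling). Redoing the arithmetic from the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied verbatim from the log entry above; the literals are
	// well-formed, so parse errors are ignored in this sketch.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	pullStart, _ := time.Parse(layout, "2025-07-07 00:01:01.669249235 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2025-07-07 00:01:04.418143699 +0000 UTC")
	e2e, _ := time.ParseDuration("4.096245518s") // podStartE2EDuration

	pull := pullEnd.Sub(pullStart) // time spent pulling the operator image
	slo := e2e - pull              // startup latency with the pull window excluded

	fmt.Println(pull) // 2.748894464s
	fmt.Println(slo)  // 1.347351054s
}

The 10ns disagreement with the logged podStartSLOduration=4.096245518s counterpart (1.347351044s) lines up with the float rounding already visible in other SLO values above, such as 1.1439198529999999.
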
Jul 7 00:01:12.538892 kubelet[2515]: I0707 00:01:12.538823 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c279f017-4d72-42eb-9821-7a32226eeccb-tigera-ca-bundle\") pod \"calico-typha-65b4bc797-hc7tz\" (UID: \"c279f017-4d72-42eb-9821-7a32226eeccb\") " pod="calico-system/calico-typha-65b4bc797-hc7tz" Jul 7 00:01:12.538892 kubelet[2515]: I0707 00:01:12.538870 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/c279f017-4d72-42eb-9821-7a32226eeccb-typha-certs\") pod \"calico-typha-65b4bc797-hc7tz\" (UID: \"c279f017-4d72-42eb-9821-7a32226eeccb\") " pod="calico-system/calico-typha-65b4bc797-hc7tz" Jul 7 00:01:12.538892 kubelet[2515]: I0707 00:01:12.538897 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ccnd\" (UniqueName: \"kubernetes.io/projected/c279f017-4d72-42eb-9821-7a32226eeccb-kube-api-access-2ccnd\") pod \"calico-typha-65b4bc797-hc7tz\" (UID: \"c279f017-4d72-42eb-9821-7a32226eeccb\") " pod="calico-system/calico-typha-65b4bc797-hc7tz" Jul 7 00:01:12.824509 kubelet[2515]: E0707 00:01:12.824063 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:12.824681 containerd[1465]: time="2025-07-07T00:01:12.824643092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b4bc797-hc7tz,Uid:c279f017-4d72-42eb-9821-7a32226eeccb,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:12.850478 containerd[1465]: time="2025-07-07T00:01:12.850359286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:12.850478 containerd[1465]: time="2025-07-07T00:01:12.850440890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:12.851427 containerd[1465]: time="2025-07-07T00:01:12.851216246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:12.851427 containerd[1465]: time="2025-07-07T00:01:12.851348376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:12.871579 systemd[1]: Started cri-containerd-528a94965d00088d323979ece98bb77dcb7ef41da4863b98ff69d4e8463addd3.scope - libcontainer container 528a94965d00088d323979ece98bb77dcb7ef41da4863b98ff69d4e8463addd3. Jul 7 00:01:12.915026 systemd[1]: Created slice kubepods-besteffort-pod09ad0233_f60e_479c_9c90_a0e7df1e545d.slice - libcontainer container kubepods-besteffort-pod09ad0233_f60e_479c_9c90_a0e7df1e545d.slice. 
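
The RunPodSandbox entry above is kubelet asking containerd, over the CRI gRPC API, to create the calico-typha pod sandbox. A bare-bones sketch of that call issued straight at the CRI socket; the pod metadata is copied from the log, while the connection details and the otherwise-empty sandbox config are illustrative only (a real kubelet request also carries log directory, DNS, and Linux security settings):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			// Values copied from the RunPodSandbox log line above.
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-typha-65b4bc797-hc7tz",
				Uid:       "c279f017-4d72-42eb-9821-7a32226eeccb",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
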
Jul 7 00:01:12.925293 containerd[1465]: time="2025-07-07T00:01:12.925184414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65b4bc797-hc7tz,Uid:c279f017-4d72-42eb-9821-7a32226eeccb,Namespace:calico-system,Attempt:0,} returns sandbox id \"528a94965d00088d323979ece98bb77dcb7ef41da4863b98ff69d4e8463addd3\"" Jul 7 00:01:12.934464 kubelet[2515]: E0707 00:01:12.933980 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:12.938346 containerd[1465]: time="2025-07-07T00:01:12.938291967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 00:01:12.941821 kubelet[2515]: I0707 00:01:12.941793 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-var-lib-calico\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.941974 kubelet[2515]: I0707 00:01:12.941827 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-var-run-calico\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.941974 kubelet[2515]: I0707 00:01:12.941845 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-flexvol-driver-host\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.941974 kubelet[2515]: I0707 00:01:12.941890 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-xtables-lock\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.941974 kubelet[2515]: I0707 00:01:12.941907 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-cni-log-dir\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.941974 kubelet[2515]: I0707 00:01:12.941921 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-lib-modules\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.942108 kubelet[2515]: I0707 00:01:12.941981 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ad0233-f60e-479c-9c90-a0e7df1e545d-tigera-ca-bundle\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.942108 kubelet[2515]: I0707 00:01:12.942005 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-gzs69\" (UniqueName: \"kubernetes.io/projected/09ad0233-f60e-479c-9c90-a0e7df1e545d-kube-api-access-gzs69\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.942108 kubelet[2515]: I0707 00:01:12.942040 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-cni-bin-dir\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.942108 kubelet[2515]: I0707 00:01:12.942055 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/09ad0233-f60e-479c-9c90-a0e7df1e545d-node-certs\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.942108 kubelet[2515]: I0707 00:01:12.942075 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-cni-net-dir\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:12.942224 kubelet[2515]: I0707 00:01:12.942088 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/09ad0233-f60e-479c-9c90-a0e7df1e545d-policysync\") pod \"calico-node-lk8cd\" (UID: \"09ad0233-f60e-479c-9c90-a0e7df1e545d\") " pod="calico-system/calico-node-lk8cd" Jul 7 00:01:13.046823 kubelet[2515]: E0707 00:01:13.046790 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.046823 kubelet[2515]: W0707 00:01:13.046812 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.046963 kubelet[2515]: E0707 00:01:13.046846 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.050778 kubelet[2515]: E0707 00:01:13.050751 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.050778 kubelet[2515]: W0707 00:01:13.050768 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.050845 kubelet[2515]: E0707 00:01:13.050781 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:13.149665 kubelet[2515]: E0707 00:01:13.149620 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:13.223115 containerd[1465]: time="2025-07-07T00:01:13.223066277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lk8cd,Uid:09ad0233-f60e-479c-9c90-a0e7df1e545d,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:13.225629 kubelet[2515]: E0707 00:01:13.225600 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.225629 kubelet[2515]: W0707 00:01:13.225619 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.225720 kubelet[2515]: E0707 00:01:13.225639 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.225952 kubelet[2515]: E0707 00:01:13.225929 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.225952 kubelet[2515]: W0707 00:01:13.225940 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.226020 kubelet[2515]: E0707 00:01:13.225953 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.226195 kubelet[2515]: E0707 00:01:13.226167 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.226195 kubelet[2515]: W0707 00:01:13.226179 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.226195 kubelet[2515]: E0707 00:01:13.226189 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.226529 kubelet[2515]: E0707 00:01:13.226514 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.226529 kubelet[2515]: W0707 00:01:13.226525 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.226596 kubelet[2515]: E0707 00:01:13.226533 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[ ... identical FlexVolume driver-call failure triplets (driver-call.go:262, driver-call.go:149, plugins.go:703) repeated for timestamps 00:01:13.226785 through 00:01:13.230201 elided ... ]
Jul 7 00:01:13.230433 kubelet[2515]: E0707 00:01:13.230416 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.230433 kubelet[2515]: W0707 00:01:13.230426 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.230433 kubelet[2515]: E0707 00:01:13.230434 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 00:01:13.230629 kubelet[2515]: E0707 00:01:13.230613 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.230629 kubelet[2515]: W0707 00:01:13.230623 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.230629 kubelet[2515]: E0707 00:01:13.230630 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.244625 kubelet[2515]: E0707 00:01:13.244594 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.244625 kubelet[2515]: W0707 00:01:13.244614 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.244625 kubelet[2515]: E0707 00:01:13.244633 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.244809 kubelet[2515]: I0707 00:01:13.244670 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6-registration-dir\") pod \"csi-node-driver-4s7sb\" (UID: \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\") " pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:13.244943 kubelet[2515]: E0707 00:01:13.244925 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.244943 kubelet[2515]: W0707 00:01:13.244936 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.244998 kubelet[2515]: E0707 00:01:13.244945 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.244998 kubelet[2515]: I0707 00:01:13.244971 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6-varrun\") pod \"csi-node-driver-4s7sb\" (UID: \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\") " pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:13.245266 kubelet[2515]: E0707 00:01:13.245240 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.245266 kubelet[2515]: W0707 00:01:13.245250 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.245266 kubelet[2515]: E0707 00:01:13.245258 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:13.245417 kubelet[2515]: I0707 00:01:13.245279 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6-kubelet-dir\") pod \"csi-node-driver-4s7sb\" (UID: \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\") " pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:13.245537 kubelet[2515]: E0707 00:01:13.245520 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.245537 kubelet[2515]: W0707 00:01:13.245530 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.245587 kubelet[2515]: E0707 00:01:13.245538 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.245587 kubelet[2515]: I0707 00:01:13.245562 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6-socket-dir\") pod \"csi-node-driver-4s7sb\" (UID: \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\") " pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:13.245796 kubelet[2515]: E0707 00:01:13.245784 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.245796 kubelet[2515]: W0707 00:01:13.245794 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.245860 kubelet[2515]: E0707 00:01:13.245801 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.245860 kubelet[2515]: I0707 00:01:13.245825 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgh4\" (UniqueName: \"kubernetes.io/projected/2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6-kube-api-access-rqgh4\") pod \"csi-node-driver-4s7sb\" (UID: \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\") " pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:13.246057 kubelet[2515]: E0707 00:01:13.246045 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.246057 kubelet[2515]: W0707 00:01:13.246054 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.246128 kubelet[2515]: E0707 00:01:13.246063 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[ ... identical FlexVolume driver-call failure triplets repeated for timestamps 00:01:13.246273 through 00:01:13.248642 elided ... ]
Jul 7 00:01:13.249391 containerd[1465]: time="2025-07-07T00:01:13.249284364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:13.249391 containerd[1465]: time="2025-07-07T00:01:13.249352733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:13.249391 containerd[1465]: time="2025-07-07T00:01:13.249365227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.249541 containerd[1465]: time="2025-07-07T00:01:13.249457280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:13.270438 systemd[1]: Started cri-containerd-ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1.scope - libcontainer container ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1. Jul 7 00:01:13.314334 containerd[1465]: time="2025-07-07T00:01:13.310975328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-lk8cd,Uid:09ad0233-f60e-479c-9c90-a0e7df1e545d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\"" Jul 7 00:01:13.347136 kubelet[2515]: E0707 00:01:13.347071 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.347136 kubelet[2515]: W0707 00:01:13.347092 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.347136 kubelet[2515]: E0707 00:01:13.347112 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.347572 kubelet[2515]: E0707 00:01:13.347322 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.347572 kubelet[2515]: W0707 00:01:13.347330 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.347572 kubelet[2515]: E0707 00:01:13.347338 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.347756 kubelet[2515]: E0707 00:01:13.347644 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.347756 kubelet[2515]: W0707 00:01:13.347652 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.347756 kubelet[2515]: E0707 00:01:13.347661 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.348230 kubelet[2515]: E0707 00:01:13.347954 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.348283 kubelet[2515]: W0707 00:01:13.348228 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.348283 kubelet[2515]: E0707 00:01:13.348250 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[ ... identical FlexVolume driver-call failure triplets repeated for timestamps 00:01:13.348486 through 00:01:13.354175 elided ... ]
Jul 7 00:01:13.354554 kubelet[2515]: E0707 00:01:13.354532 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.354601 kubelet[2515]: W0707 00:01:13.354563 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.354601 kubelet[2515]: E0707 00:01:13.354575 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 7 00:01:13.354840 kubelet[2515]: E0707 00:01:13.354814 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.354840 kubelet[2515]: W0707 00:01:13.354828 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.354890 kubelet[2515]: E0707 00:01:13.354837 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.362141 kubelet[2515]: E0707 00:01:13.362105 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:13.362283 kubelet[2515]: W0707 00:01:13.362271 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:13.362393 kubelet[2515]: E0707 00:01:13.362363 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:13.654693 systemd[1]: run-containerd-runc-k8s.io-528a94965d00088d323979ece98bb77dcb7ef41da4863b98ff69d4e8463addd3-runc.ob1FYs.mount: Deactivated successfully. Jul 7 00:01:14.514657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount396785532.mount: Deactivated successfully. Jul 7 00:01:14.857210 containerd[1465]: time="2025-07-07T00:01:14.857153527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:14.858036 containerd[1465]: time="2025-07-07T00:01:14.857970710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 00:01:14.859297 containerd[1465]: time="2025-07-07T00:01:14.859270706Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:14.861277 containerd[1465]: time="2025-07-07T00:01:14.861245126Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:14.861901 containerd[1465]: time="2025-07-07T00:01:14.861858284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.923506894s" Jul 7 00:01:14.861927 containerd[1465]: time="2025-07-07T00:01:14.861901676Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 00:01:14.862912 containerd[1465]: time="2025-07-07T00:01:14.862892077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 00:01:14.887489 containerd[1465]: time="2025-07-07T00:01:14.887449553Z" 
level=info msg="CreateContainer within sandbox \"528a94965d00088d323979ece98bb77dcb7ef41da4863b98ff69d4e8463addd3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 00:01:14.903298 containerd[1465]: time="2025-07-07T00:01:14.903262922Z" level=info msg="CreateContainer within sandbox \"528a94965d00088d323979ece98bb77dcb7ef41da4863b98ff69d4e8463addd3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6e0420ae6dc2938f83bb12b624d78f80bf4822c1639fe56aff7457370da6fc0e\"" Jul 7 00:01:14.905415 containerd[1465]: time="2025-07-07T00:01:14.905370793Z" level=info msg="StartContainer for \"6e0420ae6dc2938f83bb12b624d78f80bf4822c1639fe56aff7457370da6fc0e\"" Jul 7 00:01:14.939451 systemd[1]: Started cri-containerd-6e0420ae6dc2938f83bb12b624d78f80bf4822c1639fe56aff7457370da6fc0e.scope - libcontainer container 6e0420ae6dc2938f83bb12b624d78f80bf4822c1639fe56aff7457370da6fc0e. Jul 7 00:01:14.979941 containerd[1465]: time="2025-07-07T00:01:14.979887897Z" level=info msg="StartContainer for \"6e0420ae6dc2938f83bb12b624d78f80bf4822c1639fe56aff7457370da6fc0e\" returns successfully" Jul 7 00:01:15.057915 kubelet[2515]: E0707 00:01:15.057227 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:15.109338 kubelet[2515]: E0707 00:01:15.108777 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:15.141270 kubelet[2515]: I0707 00:01:15.141190 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-65b4bc797-hc7tz" podStartSLOduration=1.213149652 podStartE2EDuration="3.141154409s" podCreationTimestamp="2025-07-07 00:01:12 +0000 UTC" firstStartedPulling="2025-07-07 00:01:12.934678135 +0000 UTC m=+17.000279599" lastFinishedPulling="2025-07-07 00:01:14.862682892 +0000 UTC m=+18.928284356" observedRunningTime="2025-07-07 00:01:15.134579916 +0000 UTC m=+19.200181380" watchObservedRunningTime="2025-07-07 00:01:15.141154409 +0000 UTC m=+19.206755863" Jul 7 00:01:15.143200 kubelet[2515]: E0707 00:01:15.143174 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.143200 kubelet[2515]: W0707 00:01:15.143193 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.147525 kubelet[2515]: E0707 00:01:15.147495 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.147991 kubelet[2515]: E0707 00:01:15.147895 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.148031 kubelet[2515]: W0707 00:01:15.147991 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.148054 kubelet[2515]: E0707 00:01:15.148040 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.148392 kubelet[2515]: E0707 00:01:15.148368 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.148392 kubelet[2515]: W0707 00:01:15.148384 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.148392 kubelet[2515]: E0707 00:01:15.148393 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.148658 kubelet[2515]: E0707 00:01:15.148635 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.148658 kubelet[2515]: W0707 00:01:15.148650 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.148713 kubelet[2515]: E0707 00:01:15.148660 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.148899 kubelet[2515]: E0707 00:01:15.148877 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.148899 kubelet[2515]: W0707 00:01:15.148891 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.148899 kubelet[2515]: E0707 00:01:15.148900 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149094 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150242 kubelet[2515]: W0707 00:01:15.149106 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149115 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149295 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150242 kubelet[2515]: W0707 00:01:15.149316 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149328 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149597 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150242 kubelet[2515]: W0707 00:01:15.149605 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149615 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.150242 kubelet[2515]: E0707 00:01:15.149818 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150561 kubelet[2515]: W0707 00:01:15.149826 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150561 kubelet[2515]: E0707 00:01:15.149835 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.150561 kubelet[2515]: E0707 00:01:15.150036 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150561 kubelet[2515]: W0707 00:01:15.150046 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150561 kubelet[2515]: E0707 00:01:15.150055 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.150561 kubelet[2515]: E0707 00:01:15.150243 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150561 kubelet[2515]: W0707 00:01:15.150252 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150561 kubelet[2515]: E0707 00:01:15.150264 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.150735 kubelet[2515]: E0707 00:01:15.150570 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150735 kubelet[2515]: W0707 00:01:15.150581 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150735 kubelet[2515]: E0707 00:01:15.150593 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.150827 kubelet[2515]: E0707 00:01:15.150810 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.150827 kubelet[2515]: W0707 00:01:15.150824 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.150875 kubelet[2515]: E0707 00:01:15.150834 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.151090 kubelet[2515]: E0707 00:01:15.151064 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.151090 kubelet[2515]: W0707 00:01:15.151076 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.151090 kubelet[2515]: E0707 00:01:15.151086 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.151371 kubelet[2515]: E0707 00:01:15.151354 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.151371 kubelet[2515]: W0707 00:01:15.151368 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.151426 kubelet[2515]: E0707 00:01:15.151380 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.162190 kubelet[2515]: E0707 00:01:15.162135 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.162190 kubelet[2515]: W0707 00:01:15.162168 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.162286 kubelet[2515]: E0707 00:01:15.162195 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.162690 kubelet[2515]: E0707 00:01:15.162651 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.162733 kubelet[2515]: W0707 00:01:15.162682 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.162781 kubelet[2515]: E0707 00:01:15.162739 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.170628 kubelet[2515]: E0707 00:01:15.170568 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.170628 kubelet[2515]: W0707 00:01:15.170600 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.170628 kubelet[2515]: E0707 00:01:15.170628 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.170956 kubelet[2515]: E0707 00:01:15.170931 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.170956 kubelet[2515]: W0707 00:01:15.170948 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.171041 kubelet[2515]: E0707 00:01:15.170960 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.171785 kubelet[2515]: E0707 00:01:15.171185 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.171785 kubelet[2515]: W0707 00:01:15.171197 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.171785 kubelet[2515]: E0707 00:01:15.171205 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.171785 kubelet[2515]: E0707 00:01:15.171715 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.171785 kubelet[2515]: W0707 00:01:15.171723 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.171785 kubelet[2515]: E0707 00:01:15.171731 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.171971 kubelet[2515]: E0707 00:01:15.171949 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.171971 kubelet[2515]: W0707 00:01:15.171964 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.171971 kubelet[2515]: E0707 00:01:15.171972 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.172360 kubelet[2515]: E0707 00:01:15.172297 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.172360 kubelet[2515]: W0707 00:01:15.172352 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.172458 kubelet[2515]: E0707 00:01:15.172379 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.172898 kubelet[2515]: E0707 00:01:15.172878 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.172898 kubelet[2515]: W0707 00:01:15.172893 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.172956 kubelet[2515]: E0707 00:01:15.172904 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.173702 kubelet[2515]: E0707 00:01:15.173675 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.173702 kubelet[2515]: W0707 00:01:15.173692 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.173702 kubelet[2515]: E0707 00:01:15.173704 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.174099 kubelet[2515]: E0707 00:01:15.174074 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.174099 kubelet[2515]: W0707 00:01:15.174090 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.174099 kubelet[2515]: E0707 00:01:15.174100 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.174387 kubelet[2515]: E0707 00:01:15.174361 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.174387 kubelet[2515]: W0707 00:01:15.174377 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.174387 kubelet[2515]: E0707 00:01:15.174386 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.175909 kubelet[2515]: E0707 00:01:15.175884 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.175909 kubelet[2515]: W0707 00:01:15.175904 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.176062 kubelet[2515]: E0707 00:01:15.175914 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.176207 kubelet[2515]: E0707 00:01:15.176144 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.176207 kubelet[2515]: W0707 00:01:15.176157 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.176207 kubelet[2515]: E0707 00:01:15.176166 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.176642 kubelet[2515]: E0707 00:01:15.176606 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.176642 kubelet[2515]: W0707 00:01:15.176627 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.176642 kubelet[2515]: E0707 00:01:15.176636 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.176889 kubelet[2515]: E0707 00:01:15.176869 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.176889 kubelet[2515]: W0707 00:01:15.176883 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.176889 kubelet[2515]: E0707 00:01:15.176891 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:15.177482 kubelet[2515]: E0707 00:01:15.177424 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.177482 kubelet[2515]: W0707 00:01:15.177439 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.177482 kubelet[2515]: E0707 00:01:15.177457 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:15.177851 kubelet[2515]: E0707 00:01:15.177796 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:15.177851 kubelet[2515]: W0707 00:01:15.177835 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:15.177923 kubelet[2515]: E0707 00:01:15.177872 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.054039 containerd[1465]: time="2025-07-07T00:01:16.053980659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:16.054789 containerd[1465]: time="2025-07-07T00:01:16.054738950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 00:01:16.055828 containerd[1465]: time="2025-07-07T00:01:16.055792519Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:16.058029 containerd[1465]: time="2025-07-07T00:01:16.057993302Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:16.058837 containerd[1465]: time="2025-07-07T00:01:16.058782452Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.195860808s" Jul 7 00:01:16.058879 containerd[1465]: time="2025-07-07T00:01:16.058836753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 00:01:16.067762 containerd[1465]: time="2025-07-07T00:01:16.067680774Z" level=info msg="CreateContainer within sandbox \"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 00:01:16.087823 containerd[1465]: time="2025-07-07T00:01:16.087762873Z" level=info msg="CreateContainer within sandbox 
\"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1\"" Jul 7 00:01:16.088382 containerd[1465]: time="2025-07-07T00:01:16.088349280Z" level=info msg="StartContainer for \"0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1\"" Jul 7 00:01:16.118366 kubelet[2515]: I0707 00:01:16.118234 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:01:16.120608 systemd[1]: Started cri-containerd-0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1.scope - libcontainer container 0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1. Jul 7 00:01:16.123046 kubelet[2515]: E0707 00:01:16.122961 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:16.156407 kubelet[2515]: E0707 00:01:16.156364 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.156407 kubelet[2515]: W0707 00:01:16.156383 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.156407 kubelet[2515]: E0707 00:01:16.156404 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.156663 kubelet[2515]: E0707 00:01:16.156645 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.156663 kubelet[2515]: W0707 00:01:16.156659 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.156786 kubelet[2515]: E0707 00:01:16.156672 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.156935 kubelet[2515]: E0707 00:01:16.156913 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.156935 kubelet[2515]: W0707 00:01:16.156926 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.156935 kubelet[2515]: E0707 00:01:16.156935 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:16.157290 kubelet[2515]: E0707 00:01:16.157274 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.157290 kubelet[2515]: W0707 00:01:16.157286 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.157433 kubelet[2515]: E0707 00:01:16.157296 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.157602 kubelet[2515]: E0707 00:01:16.157576 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.157602 kubelet[2515]: W0707 00:01:16.157589 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.157845 kubelet[2515]: E0707 00:01:16.157612 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.157881 kubelet[2515]: E0707 00:01:16.157865 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.157881 kubelet[2515]: W0707 00:01:16.157874 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.157939 kubelet[2515]: E0707 00:01:16.157904 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.158602 kubelet[2515]: E0707 00:01:16.158282 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.158602 kubelet[2515]: W0707 00:01:16.158297 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.158602 kubelet[2515]: E0707 00:01:16.158345 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.158602 kubelet[2515]: E0707 00:01:16.158598 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.158866 kubelet[2515]: W0707 00:01:16.158607 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.158866 kubelet[2515]: E0707 00:01:16.158616 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:16.158866 kubelet[2515]: E0707 00:01:16.158844 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.158866 kubelet[2515]: W0707 00:01:16.158852 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.159034 kubelet[2515]: E0707 00:01:16.158861 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.159126 kubelet[2515]: E0707 00:01:16.159110 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.159126 kubelet[2515]: W0707 00:01:16.159123 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.159583 kubelet[2515]: E0707 00:01:16.159133 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.159583 kubelet[2515]: E0707 00:01:16.159330 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.159583 kubelet[2515]: W0707 00:01:16.159336 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.159583 kubelet[2515]: E0707 00:01:16.159344 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.159583 kubelet[2515]: E0707 00:01:16.159518 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.159583 kubelet[2515]: W0707 00:01:16.159524 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.159583 kubelet[2515]: E0707 00:01:16.159532 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.159752 kubelet[2515]: E0707 00:01:16.159693 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.159752 kubelet[2515]: W0707 00:01:16.159700 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.159752 kubelet[2515]: E0707 00:01:16.159718 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 00:01:16.159902 kubelet[2515]: E0707 00:01:16.159887 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.159902 kubelet[2515]: W0707 00:01:16.159897 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.159958 kubelet[2515]: E0707 00:01:16.159905 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.160099 kubelet[2515]: E0707 00:01:16.160086 2515 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 00:01:16.160099 kubelet[2515]: W0707 00:01:16.160095 2515 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 00:01:16.160167 kubelet[2515]: E0707 00:01:16.160102 2515 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 00:01:16.164484 systemd[1]: cri-containerd-0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1.scope: Deactivated successfully. Jul 7 00:01:16.175780 containerd[1465]: time="2025-07-07T00:01:16.175703133Z" level=info msg="StartContainer for \"0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1\" returns successfully" Jul 7 00:01:16.496159 containerd[1465]: time="2025-07-07T00:01:16.493908719Z" level=info msg="shim disconnected" id=0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1 namespace=k8s.io Jul 7 00:01:16.496159 containerd[1465]: time="2025-07-07T00:01:16.496155139Z" level=warning msg="cleaning up after shim disconnected" id=0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1 namespace=k8s.io Jul 7 00:01:16.496159 containerd[1465]: time="2025-07-07T00:01:16.496169637Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:01:16.875044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b83be64d600ddaced54f46f89008d546a3a5e93de227f44bcc017e2df88b1e1-rootfs.mount: Deactivated successfully. 
Jul 7 00:01:17.051164 kubelet[2515]: E0707 00:01:17.051107 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:17.119970 containerd[1465]: time="2025-07-07T00:01:17.119906150Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 00:01:19.051410 kubelet[2515]: E0707 00:01:19.051285 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:20.050877 containerd[1465]: time="2025-07-07T00:01:20.050819025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:20.052159 containerd[1465]: time="2025-07-07T00:01:20.052031800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 00:01:20.053588 containerd[1465]: time="2025-07-07T00:01:20.053523541Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:20.058616 containerd[1465]: time="2025-07-07T00:01:20.058586021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:20.059811 containerd[1465]: time="2025-07-07T00:01:20.059771816Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.939813668s" Jul 7 00:01:20.059861 containerd[1465]: time="2025-07-07T00:01:20.059815178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 00:01:20.103591 containerd[1465]: time="2025-07-07T00:01:20.103505789Z" level=info msg="CreateContainer within sandbox \"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 00:01:20.217482 containerd[1465]: time="2025-07-07T00:01:20.217427516Z" level=info msg="CreateContainer within sandbox \"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424\"" Jul 7 00:01:20.218135 containerd[1465]: time="2025-07-07T00:01:20.218068324Z" level=info msg="StartContainer for \"4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424\"" Jul 7 00:01:20.262465 systemd[1]: Started cri-containerd-4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424.scope - libcontainer container 4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424. 
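
[Editor's note] The recurring "Error syncing pod" lines for csi-node-driver-4s7sb ("cni plugin not initialized") persist until the install-cni container started above drops the CNI binaries and a network config onto the node; the runtime reports NetworkReady=false as long as no config is found. A rough sketch of that readiness condition, assuming the conventional default paths (/etc/cni/net.d, /opt/cni/bin), which are not shown in this log:

    # Sketch of a containerd/kubelet-style CNI readiness condition:
    # the node network counts as "ready" once at least one CNI network
    # config exists in the conf dir. Paths are conventional defaults.
    import glob
    import os

    CNI_CONF_DIR = "/etc/cni/net.d"   # assumption: default location
    CNI_BIN_DIR = "/opt/cni/bin"      # assumption: default location

    def network_ready() -> bool:
        confs = []
        for pattern in ("*.conf", "*.conflist", "*.json"):
            confs.extend(glob.glob(os.path.join(CNI_CONF_DIR, pattern)))
        return bool(confs) and os.path.isdir(CNI_BIN_DIR)

    if __name__ == "__main__":
        print("NetworkReady=true" if network_ready() else
              "NetworkReady=false reason:NetworkPluginNotReady")

This is why the same pod_workers error repeats at 00:01:17, 00:01:19, and 00:01:21 and then stops once install-cni has finished.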
Jul 7 00:01:20.295705 containerd[1465]: time="2025-07-07T00:01:20.295636845Z" level=info msg="StartContainer for \"4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424\" returns successfully" Jul 7 00:01:21.051439 kubelet[2515]: E0707 00:01:21.051393 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:21.948341 systemd[1]: cri-containerd-4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424.scope: Deactivated successfully. Jul 7 00:01:21.972054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424-rootfs.mount: Deactivated successfully. Jul 7 00:01:21.973572 containerd[1465]: time="2025-07-07T00:01:21.973515242Z" level=info msg="shim disconnected" id=4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424 namespace=k8s.io Jul 7 00:01:21.973902 containerd[1465]: time="2025-07-07T00:01:21.973574003Z" level=warning msg="cleaning up after shim disconnected" id=4e40b197a3ea37322f42e0c28cef4462cd0a89cd929932bd9dc7c094b09e6424 namespace=k8s.io Jul 7 00:01:21.973902 containerd[1465]: time="2025-07-07T00:01:21.973584894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 00:01:22.005638 kubelet[2515]: I0707 00:01:22.005392 2515 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 00:01:22.044963 systemd[1]: Created slice kubepods-besteffort-pod00999ebd_f6ba_4e92_8b64_466bdf8e89f5.slice - libcontainer container kubepods-besteffort-pod00999ebd_f6ba_4e92_8b64_466bdf8e89f5.slice. Jul 7 00:01:22.053467 systemd[1]: Created slice kubepods-besteffort-pod8b639298_27a5_41f1_a363_3a02d9d5d0b3.slice - libcontainer container kubepods-besteffort-pod8b639298_27a5_41f1_a363_3a02d9d5d0b3.slice. Jul 7 00:01:22.061449 systemd[1]: Created slice kubepods-besteffort-pod6637e26b_2700_424e_b4eb_f1031d446d3b.slice - libcontainer container kubepods-besteffort-pod6637e26b_2700_424e_b4eb_f1031d446d3b.slice. Jul 7 00:01:22.071150 systemd[1]: Created slice kubepods-burstable-podef0cbce2_a46c_4e9a_95aa_18c3fe6c5ece.slice - libcontainer container kubepods-burstable-podef0cbce2_a46c_4e9a_95aa_18c3fe6c5ece.slice. Jul 7 00:01:22.079318 systemd[1]: Created slice kubepods-besteffort-pod70595641_eb8d_401b_b091_967934cddf8d.slice - libcontainer container kubepods-besteffort-pod70595641_eb8d_401b_b091_967934cddf8d.slice. Jul 7 00:01:22.084453 systemd[1]: Created slice kubepods-besteffort-pod05a5d423_9d7b_48ae_ba84_e25c4fe860af.slice - libcontainer container kubepods-besteffort-pod05a5d423_9d7b_48ae_ba84_e25c4fe860af.slice. Jul 7 00:01:22.092066 systemd[1]: Created slice kubepods-burstable-pod24b17fed_8f09_46ce_964b_ca73d8ad630a.slice - libcontainer container kubepods-burstable-pod24b17fed_8f09_46ce_964b_ca73d8ad630a.slice. 
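
[Editor's note] The burst of "Created slice" lines shows kubelet's systemd cgroup driver at work: each pod gets a slice under kubepods-<qos>.slice, where the QoS class (besteffort, burstable, or guaranteed) is derived from the pod's resource requests and limits, and the pod UID appears with its dashes swapped for underscores to form a valid systemd unit name. A small sketch of the naming convention as it appears in these log lines; the guaranteed branch is an assumption from the general scheme, not shown here:

    # Reconstruct the systemd slice name kubelet's cgroup driver uses for a
    # pod, matching e.g.
    # kubepods-besteffort-pod00999ebd_f6ba_4e92_8b64_466bdf8e89f5.slice
    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        # systemd treats "-" as a hierarchy separator in unit names,
        # so the dashes inside the UID become underscores.
        escaped = pod_uid.replace("-", "_")
        if qos_class == "guaranteed":
            # Guaranteed pods sit directly under kubepods.slice (assumption).
            return f"kubepods-pod{escaped}.slice"
        return f"kubepods-{qos_class}-pod{escaped}.slice"

    assert (pod_slice_name("00999ebd-f6ba-4e92-8b64-466bdf8e89f5", "besteffort")
            == "kubepods-besteffort-pod00999ebd_f6ba_4e92_8b64_466bdf8e89f5.slice")

The slice names therefore identify each pod in the volume-reconciler lines that follow: the UID embedded in the slice matches the UID quoted in the VerifyControllerAttachedVolume messages below.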
Jul 7 00:01:22.124120 kubelet[2515]: I0707 00:01:22.124062 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70595641-eb8d-401b-b091-967934cddf8d-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-cdfxn\" (UID: \"70595641-eb8d-401b-b091-967934cddf8d\") " pod="calico-system/goldmane-768f4c5c69-cdfxn"
Jul 7 00:01:22.124120 kubelet[2515]: I0707 00:01:22.124100 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/05a5d423-9d7b-48ae-ba84-e25c4fe860af-calico-apiserver-certs\") pod \"calico-apiserver-7b996bf4d6-mtmf7\" (UID: \"05a5d423-9d7b-48ae-ba84-e25c4fe860af\") " pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7"
Jul 7 00:01:22.124120 kubelet[2515]: I0707 00:01:22.124116 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00999ebd-f6ba-4e92-8b64-466bdf8e89f5-tigera-ca-bundle\") pod \"calico-kube-controllers-5c7b8cc4b5-cbtqq\" (UID: \"00999ebd-f6ba-4e92-8b64-466bdf8e89f5\") " pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq"
Jul 7 00:01:22.124120 kubelet[2515]: I0707 00:01:22.124130 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-296rs\" (UniqueName: \"kubernetes.io/projected/70595641-eb8d-401b-b091-967934cddf8d-kube-api-access-296rs\") pod \"goldmane-768f4c5c69-cdfxn\" (UID: \"70595641-eb8d-401b-b091-967934cddf8d\") " pod="calico-system/goldmane-768f4c5c69-cdfxn"
Jul 7 00:01:22.124715 kubelet[2515]: I0707 00:01:22.124145 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hw7h8\" (UniqueName: \"kubernetes.io/projected/05a5d423-9d7b-48ae-ba84-e25c4fe860af-kube-api-access-hw7h8\") pod \"calico-apiserver-7b996bf4d6-mtmf7\" (UID: \"05a5d423-9d7b-48ae-ba84-e25c4fe860af\") " pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7"
Jul 7 00:01:22.124715 kubelet[2515]: I0707 00:01:22.124200 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece-config-volume\") pod \"coredns-674b8bbfcf-jpvlw\" (UID: \"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece\") " pod="kube-system/coredns-674b8bbfcf-jpvlw"
Jul 7 00:01:22.124715 kubelet[2515]: I0707 00:01:22.124242 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-backend-key-pair\") pod \"whisker-7d4f4f56d5-58mzt\" (UID: \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\") " pod="calico-system/whisker-7d4f4f56d5-58mzt"
Jul 7 00:01:22.124715 kubelet[2515]: I0707 00:01:22.124271 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-ca-bundle\") pod \"whisker-7d4f4f56d5-58mzt\" (UID: \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\") " pod="calico-system/whisker-7d4f4f56d5-58mzt"
Jul 7 00:01:22.124715 kubelet[2515]: I0707 00:01:22.124339 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24b17fed-8f09-46ce-964b-ca73d8ad630a-config-volume\") pod \"coredns-674b8bbfcf-dvkj4\" (UID: \"24b17fed-8f09-46ce-964b-ca73d8ad630a\") " pod="kube-system/coredns-674b8bbfcf-dvkj4"
Jul 7 00:01:22.124826 kubelet[2515]: I0707 00:01:22.124360 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjnnx\" (UniqueName: \"kubernetes.io/projected/8b639298-27a5-41f1-a363-3a02d9d5d0b3-kube-api-access-zjnnx\") pod \"whisker-7d4f4f56d5-58mzt\" (UID: \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\") " pod="calico-system/whisker-7d4f4f56d5-58mzt"
Jul 7 00:01:22.124826 kubelet[2515]: I0707 00:01:22.124380 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qjcv\" (UniqueName: \"kubernetes.io/projected/6637e26b-2700-424e-b4eb-f1031d446d3b-kube-api-access-2qjcv\") pod \"calico-apiserver-7b996bf4d6-k76r4\" (UID: \"6637e26b-2700-424e-b4eb-f1031d446d3b\") " pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4"
Jul 7 00:01:22.124826 kubelet[2515]: I0707 00:01:22.124397 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4c7\" (UniqueName: \"kubernetes.io/projected/00999ebd-f6ba-4e92-8b64-466bdf8e89f5-kube-api-access-fd4c7\") pod \"calico-kube-controllers-5c7b8cc4b5-cbtqq\" (UID: \"00999ebd-f6ba-4e92-8b64-466bdf8e89f5\") " pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq"
Jul 7 00:01:22.124826 kubelet[2515]: I0707 00:01:22.124411 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/70595641-eb8d-401b-b091-967934cddf8d-goldmane-key-pair\") pod \"goldmane-768f4c5c69-cdfxn\" (UID: \"70595641-eb8d-401b-b091-967934cddf8d\") " pod="calico-system/goldmane-768f4c5c69-cdfxn"
Jul 7 00:01:22.124826 kubelet[2515]: I0707 00:01:22.124468 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z5p6\" (UniqueName: \"kubernetes.io/projected/24b17fed-8f09-46ce-964b-ca73d8ad630a-kube-api-access-8z5p6\") pod \"coredns-674b8bbfcf-dvkj4\" (UID: \"24b17fed-8f09-46ce-964b-ca73d8ad630a\") " pod="kube-system/coredns-674b8bbfcf-dvkj4"
Jul 7 00:01:22.124957 kubelet[2515]: I0707 00:01:22.124500 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6637e26b-2700-424e-b4eb-f1031d446d3b-calico-apiserver-certs\") pod \"calico-apiserver-7b996bf4d6-k76r4\" (UID: \"6637e26b-2700-424e-b4eb-f1031d446d3b\") " pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4"
Jul 7 00:01:22.124957 kubelet[2515]: I0707 00:01:22.124533 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mkj9\" (UniqueName: \"kubernetes.io/projected/ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece-kube-api-access-7mkj9\") pod \"coredns-674b8bbfcf-jpvlw\" (UID: \"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece\") " pod="kube-system/coredns-674b8bbfcf-jpvlw"
Jul 7 00:01:22.124957 kubelet[2515]: I0707 00:01:22.124552 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/70595641-eb8d-401b-b091-967934cddf8d-config\") pod \"goldmane-768f4c5c69-cdfxn\" (UID: \"70595641-eb8d-401b-b091-967934cddf8d\") " pod="calico-system/goldmane-768f4c5c69-cdfxn"
Jul 7 00:01:22.131493 containerd[1465]: time="2025-07-07T00:01:22.131455504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Jul 7 00:01:22.350987 containerd[1465]: time="2025-07-07T00:01:22.350949462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7b8cc4b5-cbtqq,Uid:00999ebd-f6ba-4e92-8b64-466bdf8e89f5,Namespace:calico-system,Attempt:0,}"
Jul 7 00:01:22.359451 containerd[1465]: time="2025-07-07T00:01:22.359417460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4f4f56d5-58mzt,Uid:8b639298-27a5-41f1-a363-3a02d9d5d0b3,Namespace:calico-system,Attempt:0,}"
Jul 7 00:01:22.374525 containerd[1465]: time="2025-07-07T00:01:22.374470429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-k76r4,Uid:6637e26b-2700-424e-b4eb-f1031d446d3b,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:01:22.376848 kubelet[2515]: E0707 00:01:22.376798 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:01:22.377441 containerd[1465]: time="2025-07-07T00:01:22.377253179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpvlw,Uid:ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece,Namespace:kube-system,Attempt:0,}"
Jul 7 00:01:22.386818 containerd[1465]: time="2025-07-07T00:01:22.386759332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdfxn,Uid:70595641-eb8d-401b-b091-967934cddf8d,Namespace:calico-system,Attempt:0,}"
Jul 7 00:01:22.394656 kubelet[2515]: E0707 00:01:22.394583 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 00:01:22.396228 containerd[1465]: time="2025-07-07T00:01:22.396175165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-mtmf7,Uid:05a5d423-9d7b-48ae-ba84-e25c4fe860af,Namespace:calico-apiserver,Attempt:0,}"
Jul 7 00:01:22.396927 containerd[1465]: time="2025-07-07T00:01:22.396473277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvkj4,Uid:24b17fed-8f09-46ce-964b-ca73d8ad630a,Namespace:kube-system,Attempt:0,}"
Jul 7 00:01:22.634120 containerd[1465]: time="2025-07-07T00:01:22.634001969Z" level=error msg="Failed to destroy network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.637719 containerd[1465]: time="2025-07-07T00:01:22.637679385Z" level=error msg="Failed to destroy network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.640650 containerd[1465]: time="2025-07-07T00:01:22.640600697Z" level=error msg="encountered an error cleaning up failed sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.640725 containerd[1465]: time="2025-07-07T00:01:22.640665037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7b8cc4b5-cbtqq,Uid:00999ebd-f6ba-4e92-8b64-466bdf8e89f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.641027 kubelet[2515]: E0707 00:01:22.640964 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.641097 kubelet[2515]: E0707 00:01:22.641058 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq"
Jul 7 00:01:22.641136 kubelet[2515]: E0707 00:01:22.641092 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq"
Jul 7 00:01:22.641214 kubelet[2515]: E0707 00:01:22.641166 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c7b8cc4b5-cbtqq_calico-system(00999ebd-f6ba-4e92-8b64-466bdf8e89f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c7b8cc4b5-cbtqq_calico-system(00999ebd-f6ba-4e92-8b64-466bdf8e89f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq" podUID="00999ebd-f6ba-4e92-8b64-466bdf8e89f5"
Jul 7 00:01:22.646344 containerd[1465]: time="2025-07-07T00:01:22.646290682Z" level=error msg="encountered an error cleaning up failed sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.646407 containerd[1465]: time="2025-07-07T00:01:22.646357158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d4f4f56d5-58mzt,Uid:8b639298-27a5-41f1-a363-3a02d9d5d0b3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.646523 kubelet[2515]: E0707 00:01:22.646498 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.646588 kubelet[2515]: E0707 00:01:22.646533 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4f4f56d5-58mzt"
Jul 7 00:01:22.646588 kubelet[2515]: E0707 00:01:22.646550 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d4f4f56d5-58mzt"
Jul 7 00:01:22.646663 kubelet[2515]: E0707 00:01:22.646592 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d4f4f56d5-58mzt_calico-system(8b639298-27a5-41f1-a363-3a02d9d5d0b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d4f4f56d5-58mzt_calico-system(8b639298-27a5-41f1-a363-3a02d9d5d0b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4f4f56d5-58mzt" podUID="8b639298-27a5-41f1-a363-3a02d9d5d0b3"
Jul 7 00:01:22.792498 containerd[1465]: time="2025-07-07T00:01:22.792411028Z" level=error msg="Failed to destroy network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.796182 containerd[1465]: time="2025-07-07T00:01:22.796088254Z" level=error msg="encountered an error cleaning up failed sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.797172 containerd[1465]: time="2025-07-07T00:01:22.797146106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-k76r4,Uid:6637e26b-2700-424e-b4eb-f1031d446d3b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.797522 kubelet[2515]: E0707 00:01:22.797485 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.797638 kubelet[2515]: E0707 00:01:22.797622 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4"
Jul 7 00:01:22.797721 kubelet[2515]: E0707 00:01:22.797707 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4"
Jul 7 00:01:22.797840 kubelet[2515]: E0707 00:01:22.797815 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b996bf4d6-k76r4_calico-apiserver(6637e26b-2700-424e-b4eb-f1031d446d3b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b996bf4d6-k76r4_calico-apiserver(6637e26b-2700-424e-b4eb-f1031d446d3b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4" podUID="6637e26b-2700-424e-b4eb-f1031d446d3b"
Jul 7 00:01:22.801442 containerd[1465]: time="2025-07-07T00:01:22.801378106Z" level=error msg="Failed to destroy network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.802172 containerd[1465]: time="2025-07-07T00:01:22.802138157Z" level=error msg="encountered an error cleaning up failed sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.802232 containerd[1465]: time="2025-07-07T00:01:22.802197670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdfxn,Uid:70595641-eb8d-401b-b091-967934cddf8d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.802539 kubelet[2515]: E0707 00:01:22.802494 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.802605 kubelet[2515]: E0707 00:01:22.802573 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-cdfxn"
Jul 7 00:01:22.802605 kubelet[2515]: E0707 00:01:22.802596 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-cdfxn"
Jul 7 00:01:22.802706 kubelet[2515]: E0707 00:01:22.802654 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-cdfxn_calico-system(70595641-eb8d-401b-b091-967934cddf8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-cdfxn_calico-system(70595641-eb8d-401b-b091-967934cddf8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-cdfxn" podUID="70595641-eb8d-401b-b091-967934cddf8d"
Jul 7 00:01:22.812686 containerd[1465]: time="2025-07-07T00:01:22.812621370Z" level=error msg="Failed to destroy network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.813103 containerd[1465]: time="2025-07-07T00:01:22.813058443Z" level=error msg="encountered an error cleaning up failed sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.813184 containerd[1465]: time="2025-07-07T00:01:22.813149534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvkj4,Uid:24b17fed-8f09-46ce-964b-ca73d8ad630a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.813568 kubelet[2515]: E0707 00:01:22.813518 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.813649 kubelet[2515]: E0707 00:01:22.813592 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvkj4"
Jul 7 00:01:22.813649 kubelet[2515]: E0707 00:01:22.813618 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dvkj4"
Jul 7 00:01:22.813726 kubelet[2515]: E0707 00:01:22.813672 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dvkj4_kube-system(24b17fed-8f09-46ce-964b-ca73d8ad630a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dvkj4_kube-system(24b17fed-8f09-46ce-964b-ca73d8ad630a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvkj4" podUID="24b17fed-8f09-46ce-964b-ca73d8ad630a"
Jul 7 00:01:22.814194 containerd[1465]: time="2025-07-07T00:01:22.814153976Z" level=error msg="Failed to destroy network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.814787 containerd[1465]: time="2025-07-07T00:01:22.814670399Z" level=error msg="encountered an error cleaning up failed sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.814787 containerd[1465]: time="2025-07-07T00:01:22.814723278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpvlw,Uid:ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.815551 kubelet[2515]: E0707 00:01:22.814940 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.815551 kubelet[2515]: E0707 00:01:22.814974 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jpvlw"
Jul 7 00:01:22.815551 kubelet[2515]: E0707 00:01:22.814990 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jpvlw"
Jul 7 00:01:22.815671 kubelet[2515]: E0707 00:01:22.815029 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jpvlw_kube-system(ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jpvlw_kube-system(ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jpvlw" podUID="ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece"
Jul 7 00:01:22.840454 containerd[1465]: time="2025-07-07T00:01:22.840406007Z" level=error msg="Failed to destroy network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.840793 containerd[1465]: time="2025-07-07T00:01:22.840765594Z" level=error msg="encountered an error cleaning up failed sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.840831 containerd[1465]: time="2025-07-07T00:01:22.840811810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-mtmf7,Uid:05a5d423-9d7b-48ae-ba84-e25c4fe860af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.841074 kubelet[2515]: E0707 00:01:22.841037 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 7 00:01:22.841132 kubelet[2515]: E0707 00:01:22.841106 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7"
Jul 7 00:01:22.841160 kubelet[2515]: E0707 00:01:22.841131 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7"
Jul 7 00:01:22.841206 kubelet[2515]: E0707 00:01:22.841183 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b996bf4d6-mtmf7_calico-apiserver(05a5d423-9d7b-48ae-ba84-e25c4fe860af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b996bf4d6-mtmf7_calico-apiserver(05a5d423-9d7b-48ae-ba84-e25c4fe860af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7" podUID="05a5d423-9d7b-48ae-ba84-e25c4fe860af"
Jul 7 00:01:22.975098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106-shm.mount: Deactivated successfully.
Jul 7 00:01:23.056277 systemd[1]: Created slice kubepods-besteffort-pod2fd0fbaa_66a3_48fe_88c7_fc37b194a8d6.slice - libcontainer container kubepods-besteffort-pod2fd0fbaa_66a3_48fe_88c7_fc37b194a8d6.slice.
Jul 7 00:01:23.058471 containerd[1465]: time="2025-07-07T00:01:23.058428374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4s7sb,Uid:2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:23.116185 containerd[1465]: time="2025-07-07T00:01:23.116128520Z" level=error msg="Failed to destroy network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.116579 containerd[1465]: time="2025-07-07T00:01:23.116548621Z" level=error msg="encountered an error cleaning up failed sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.116620 containerd[1465]: time="2025-07-07T00:01:23.116596051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4s7sb,Uid:2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.116930 kubelet[2515]: E0707 00:01:23.116894 2515 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.117012 kubelet[2515]: E0707 00:01:23.116959 2515 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:23.117012 kubelet[2515]: E0707 00:01:23.116981 2515 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4s7sb" Jul 7 00:01:23.117123 kubelet[2515]: E0707 00:01:23.117036 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4s7sb_calico-system(2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4s7sb_calico-system(2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:23.118555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055-shm.mount: Deactivated successfully. Jul 7 00:01:23.132721 kubelet[2515]: I0707 00:01:23.132695 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:23.133428 containerd[1465]: time="2025-07-07T00:01:23.133369671Z" level=info msg="StopPodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\"" Jul 7 00:01:23.134632 containerd[1465]: time="2025-07-07T00:01:23.134606690Z" level=info msg="Ensure that sandbox 81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098 in task-service has been cleanup successfully" Jul 7 00:01:23.135998 kubelet[2515]: I0707 00:01:23.135639 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:23.136214 containerd[1465]: time="2025-07-07T00:01:23.136140939Z" level=info msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" Jul 7 00:01:23.136936 containerd[1465]: time="2025-07-07T00:01:23.136569355Z" level=info msg="Ensure that sandbox 8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42 in task-service has been cleanup successfully" Jul 7 00:01:23.137010 kubelet[2515]: I0707 00:01:23.136647 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:23.137879 containerd[1465]: time="2025-07-07T00:01:23.137212346Z" level=info msg="StopPodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\"" Jul 7 00:01:23.139344 containerd[1465]: time="2025-07-07T00:01:23.138243818Z" level=info msg="Ensure that sandbox 00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07 in task-service has been cleanup successfully" Jul 7 00:01:23.139419 kubelet[2515]: I0707 00:01:23.138358 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:23.153036 containerd[1465]: time="2025-07-07T00:01:23.152244979Z" level=info msg="StopPodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\"" Jul 7 00:01:23.154471 containerd[1465]: time="2025-07-07T00:01:23.154439100Z" level=info msg="Ensure that sandbox bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90 in task-service has been cleanup successfully" Jul 7 00:01:23.156172 kubelet[2515]: I0707 00:01:23.156154 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:23.159688 containerd[1465]: time="2025-07-07T00:01:23.158993574Z" level=info msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" Jul 7 00:01:23.196105 containerd[1465]: time="2025-07-07T00:01:23.196055033Z" level=info msg="Ensure that sandbox 2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0 in task-service has been 
cleanup successfully" Jul 7 00:01:23.200393 kubelet[2515]: I0707 00:01:23.199352 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:23.204377 containerd[1465]: time="2025-07-07T00:01:23.202974911Z" level=error msg="StopPodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" failed" error="failed to destroy network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.205942 kubelet[2515]: E0707 00:01:23.205914 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:23.206110 kubelet[2515]: E0707 00:01:23.206069 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098"} Jul 7 00:01:23.206402 kubelet[2515]: E0707 00:01:23.206253 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"05a5d423-9d7b-48ae-ba84-e25c4fe860af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.206768 kubelet[2515]: E0707 00:01:23.206662 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"05a5d423-9d7b-48ae-ba84-e25c4fe860af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7" podUID="05a5d423-9d7b-48ae-ba84-e25c4fe860af" Jul 7 00:01:23.229410 containerd[1465]: time="2025-07-07T00:01:23.228785015Z" level=info msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" Jul 7 00:01:23.233680 kubelet[2515]: I0707 00:01:23.233644 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:23.238734 containerd[1465]: time="2025-07-07T00:01:23.238674194Z" level=info msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" Jul 7 00:01:23.243655 containerd[1465]: time="2025-07-07T00:01:23.243616210Z" level=info msg="Ensure that sandbox 7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055 in task-service has been cleanup successfully" Jul 7 00:01:23.261254 containerd[1465]: 
time="2025-07-07T00:01:23.261205185Z" level=info msg="Ensure that sandbox 51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106 in task-service has been cleanup successfully" Jul 7 00:01:23.268613 containerd[1465]: time="2025-07-07T00:01:23.268560293Z" level=error msg="StopPodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" failed" error="failed to destroy network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.269390 kubelet[2515]: E0707 00:01:23.268800 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:23.269390 kubelet[2515]: E0707 00:01:23.268868 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07"} Jul 7 00:01:23.269390 kubelet[2515]: E0707 00:01:23.268905 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.269390 kubelet[2515]: E0707 00:01:23.268931 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jpvlw" podUID="ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece" Jul 7 00:01:23.269390 kubelet[2515]: I0707 00:01:23.269362 2515 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:23.270808 containerd[1465]: time="2025-07-07T00:01:23.270686155Z" level=info msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" Jul 7 00:01:23.271123 containerd[1465]: time="2025-07-07T00:01:23.271069697Z" level=info msg="Ensure that sandbox 983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf in task-service has been cleanup successfully" Jul 7 00:01:23.291381 containerd[1465]: time="2025-07-07T00:01:23.291167126Z" level=error msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" failed" error="failed to destroy network for sandbox 
\"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.291554 kubelet[2515]: E0707 00:01:23.291451 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:23.291554 kubelet[2515]: E0707 00:01:23.291511 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42"} Jul 7 00:01:23.291652 kubelet[2515]: E0707 00:01:23.291554 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70595641-eb8d-401b-b091-967934cddf8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.291652 kubelet[2515]: E0707 00:01:23.291581 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70595641-eb8d-401b-b091-967934cddf8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-cdfxn" podUID="70595641-eb8d-401b-b091-967934cddf8d" Jul 7 00:01:23.291917 containerd[1465]: time="2025-07-07T00:01:23.291870060Z" level=error msg="StopPodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" failed" error="failed to destroy network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.292086 kubelet[2515]: E0707 00:01:23.292055 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:23.292152 kubelet[2515]: E0707 00:01:23.292134 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90"} Jul 7 00:01:23.292206 kubelet[2515]: E0707 00:01:23.292185 2515 kuberuntime_manager.go:1161] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6637e26b-2700-424e-b4eb-f1031d446d3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.292258 kubelet[2515]: E0707 00:01:23.292208 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6637e26b-2700-424e-b4eb-f1031d446d3b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4" podUID="6637e26b-2700-424e-b4eb-f1031d446d3b" Jul 7 00:01:23.299173 containerd[1465]: time="2025-07-07T00:01:23.299032685Z" level=error msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" failed" error="failed to destroy network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.299538 kubelet[2515]: E0707 00:01:23.299480 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:23.299595 kubelet[2515]: E0707 00:01:23.299551 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055"} Jul 7 00:01:23.299595 kubelet[2515]: E0707 00:01:23.299584 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.299691 kubelet[2515]: E0707 00:01:23.299607 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4s7sb" 
podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:23.308885 containerd[1465]: time="2025-07-07T00:01:23.308820223Z" level=error msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" failed" error="failed to destroy network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.309147 kubelet[2515]: E0707 00:01:23.309111 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:23.309215 kubelet[2515]: E0707 00:01:23.309163 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0"} Jul 7 00:01:23.309215 kubelet[2515]: E0707 00:01:23.309197 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.309319 kubelet[2515]: E0707 00:01:23.309221 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4f4f56d5-58mzt" podUID="8b639298-27a5-41f1-a363-3a02d9d5d0b3" Jul 7 00:01:23.309686 containerd[1465]: time="2025-07-07T00:01:23.309512757Z" level=error msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" failed" error="failed to destroy network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.309779 kubelet[2515]: E0707 00:01:23.309742 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:23.309817 kubelet[2515]: E0707 
00:01:23.309781 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106"} Jul 7 00:01:23.309817 kubelet[2515]: E0707 00:01:23.309803 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00999ebd-f6ba-4e92-8b64-466bdf8e89f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.309916 kubelet[2515]: E0707 00:01:23.309820 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00999ebd-f6ba-4e92-8b64-466bdf8e89f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq" podUID="00999ebd-f6ba-4e92-8b64-466bdf8e89f5" Jul 7 00:01:23.311719 containerd[1465]: time="2025-07-07T00:01:23.311669578Z" level=error msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" failed" error="failed to destroy network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:23.311900 kubelet[2515]: E0707 00:01:23.311849 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:23.311951 kubelet[2515]: E0707 00:01:23.311920 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf"} Jul 7 00:01:23.311997 kubelet[2515]: E0707 00:01:23.311974 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24b17fed-8f09-46ce-964b-ca73d8ad630a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:23.312044 kubelet[2515]: E0707 00:01:23.312011 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24b17fed-8f09-46ce-964b-ca73d8ad630a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvkj4" podUID="24b17fed-8f09-46ce-964b-ca73d8ad630a" Jul 7 00:01:29.011983 kubelet[2515]: I0707 00:01:29.011906 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:01:29.012605 kubelet[2515]: E0707 00:01:29.012393 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:29.282651 kubelet[2515]: E0707 00:01:29.281405 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:31.073957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2875770296.mount: Deactivated successfully. Jul 7 00:01:33.610007 containerd[1465]: time="2025-07-07T00:01:33.609930500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:33.663429 containerd[1465]: time="2025-07-07T00:01:33.663341203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 00:01:33.665073 containerd[1465]: time="2025-07-07T00:01:33.665044453Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:33.669445 containerd[1465]: time="2025-07-07T00:01:33.669375431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:33.670019 containerd[1465]: time="2025-07-07T00:01:33.669965590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 11.538466495s" Jul 7 00:01:33.670019 containerd[1465]: time="2025-07-07T00:01:33.670006337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 00:01:33.690870 containerd[1465]: time="2025-07-07T00:01:33.690824746Z" level=info msg="CreateContainer within sandbox \"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 00:01:33.720410 containerd[1465]: time="2025-07-07T00:01:33.720366326Z" level=info msg="CreateContainer within sandbox \"ffb70262513fc4ba635e9292bed0f867cde14e6a6b0cd5299bb80a52b03091a1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2b67e7af53582c8546817b52f36ca50310e6ad98b1bb1a100261b7a6cb0157f0\"" Jul 7 00:01:33.720985 containerd[1465]: time="2025-07-07T00:01:33.720959321Z" level=info msg="StartContainer for \"2b67e7af53582c8546817b52f36ca50310e6ad98b1bb1a100261b7a6cb0157f0\"" Jul 7 00:01:33.784622 systemd[1]: Started 
cri-containerd-2b67e7af53582c8546817b52f36ca50310e6ad98b1bb1a100261b7a6cb0157f0.scope - libcontainer container 2b67e7af53582c8546817b52f36ca50310e6ad98b1bb1a100261b7a6cb0157f0. Jul 7 00:01:34.051677 containerd[1465]: time="2025-07-07T00:01:34.051461074Z" level=info msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" Jul 7 00:01:34.312101 containerd[1465]: time="2025-07-07T00:01:34.311964973Z" level=error msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" failed" error="failed to destroy network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:34.312856 kubelet[2515]: E0707 00:01:34.312294 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:34.312856 kubelet[2515]: E0707 00:01:34.312383 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0"} Jul 7 00:01:34.312856 kubelet[2515]: E0707 00:01:34.312430 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:34.312856 kubelet[2515]: E0707 00:01:34.312465 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d4f4f56d5-58mzt" podUID="8b639298-27a5-41f1-a363-3a02d9d5d0b3" Jul 7 00:01:34.322237 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 00:01:34.322354 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
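The StopPodSandbox failures here and at 00:01:23 share one root cause: Calico's CNI plugin will not tear down a sandbox until it can read the node name that the calico/node container writes to /var/lib/calico/nodename, and that container only began starting at 00:01:33 (the wireguard module load right after is most likely calico-node probing its optional WireGuard encryption support). A minimal sketch of the guard, with the path and error text taken from the log rather than from Calico's actual source:

```go
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// nodenameFile is written by the calico/node container once it is up;
// until then every CNI delete fails with the error seen in the log.
const nodenameFile = "/var/lib/calico/nodename"

func loadNodename() (string, error) {
	b, err := os.ReadFile(nodenameFile)
	if errors.Is(err, fs.ErrNotExist) {
		return "", fmt.Errorf(
			"stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/",
			nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		// mirrors the plugin type="calico" failed (delete) lines above
		fmt.Println("failed (delete):", err)
		return
	}
	fmt.Println("node:", name)
}
```

Once calico-node is running and the file exists, the retried teardowns at 00:01:37 below complete ("Workload's veth was already gone. Nothing to do.").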
Jul 7 00:01:34.484150 containerd[1465]: time="2025-07-07T00:01:34.484076580Z" level=info msg="StartContainer for \"2b67e7af53582c8546817b52f36ca50310e6ad98b1bb1a100261b7a6cb0157f0\" returns successfully" Jul 7 00:01:36.052232 containerd[1465]: time="2025-07-07T00:01:36.052178593Z" level=info msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" Jul 7 00:01:36.052232 containerd[1465]: time="2025-07-07T00:01:36.052219791Z" level=info msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" Jul 7 00:01:36.052679 containerd[1465]: time="2025-07-07T00:01:36.052527949Z" level=info msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" Jul 7 00:01:36.053883 containerd[1465]: time="2025-07-07T00:01:36.052178493Z" level=info msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" Jul 7 00:01:36.083347 containerd[1465]: time="2025-07-07T00:01:36.083284140Z" level=error msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" failed" error="failed to destroy network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:36.083574 containerd[1465]: time="2025-07-07T00:01:36.083284831Z" level=error msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" failed" error="failed to destroy network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:36.083702 kubelet[2515]: E0707 00:01:36.083672 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:36.084150 kubelet[2515]: E0707 00:01:36.083717 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106"} Jul 7 00:01:36.084150 kubelet[2515]: E0707 00:01:36.083750 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00999ebd-f6ba-4e92-8b64-466bdf8e89f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:36.084150 kubelet[2515]: E0707 00:01:36.083772 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00999ebd-f6ba-4e92-8b64-466bdf8e89f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq" podUID="00999ebd-f6ba-4e92-8b64-466bdf8e89f5" Jul 7 00:01:36.084150 kubelet[2515]: E0707 00:01:36.083801 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:36.084150 kubelet[2515]: E0707 00:01:36.083814 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055"} Jul 7 00:01:36.084383 kubelet[2515]: E0707 00:01:36.083830 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:36.084383 kubelet[2515]: E0707 00:01:36.083846 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4s7sb" podUID="2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6" Jul 7 00:01:36.085250 containerd[1465]: time="2025-07-07T00:01:36.085220388Z" level=error msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" failed" error="failed to destroy network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:36.085362 kubelet[2515]: E0707 00:01:36.085335 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:36.085401 kubelet[2515]: E0707 00:01:36.085366 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf"} Jul 7 00:01:36.085401 kubelet[2515]: E0707 00:01:36.085387 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24b17fed-8f09-46ce-964b-ca73d8ad630a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:36.085492 kubelet[2515]: E0707 00:01:36.085405 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24b17fed-8f09-46ce-964b-ca73d8ad630a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dvkj4" podUID="24b17fed-8f09-46ce-964b-ca73d8ad630a" Jul 7 00:01:36.249372 containerd[1465]: time="2025-07-07T00:01:36.249295278Z" level=error msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" failed" error="failed to destroy network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 00:01:36.250341 kubelet[2515]: E0707 00:01:36.249591 2515 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:36.250341 kubelet[2515]: E0707 00:01:36.249659 2515 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42"} Jul 7 00:01:36.250341 kubelet[2515]: E0707 00:01:36.249701 2515 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70595641-eb8d-401b-b091-967934cddf8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 00:01:36.250341 kubelet[2515]: E0707 00:01:36.249736 2515 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70595641-eb8d-401b-b091-967934cddf8d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-cdfxn" podUID="70595641-eb8d-401b-b091-967934cddf8d" Jul 7 00:01:37.051754 containerd[1465]: time="2025-07-07T00:01:37.051709446Z" level=info msg="StopPodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\"" Jul 7 00:01:37.457159 kubelet[2515]: I0707 00:01:37.457030 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-lk8cd" podStartSLOduration=5.099504692 podStartE2EDuration="25.456999589s" podCreationTimestamp="2025-07-07 00:01:12 +0000 UTC" firstStartedPulling="2025-07-07 00:01:13.313316472 +0000 UTC m=+17.378917936" lastFinishedPulling="2025-07-07 00:01:33.670811369 +0000 UTC m=+37.736412833" observedRunningTime="2025-07-07 00:01:35.873089982 +0000 UTC m=+39.938691447" watchObservedRunningTime="2025-07-07 00:01:37.456999589 +0000 UTC m=+41.522601043" Jul 7 00:01:37.460998 containerd[1465]: time="2025-07-07T00:01:37.460278296Z" level=info msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" Jul 7 00:01:37.542947 systemd[1]: Started sshd@7-10.0.0.146:22-10.0.0.1:46182.service - OpenSSH per-connection server daemon (10.0.0.1:46182). Jul 7 00:01:37.595577 sshd[4021]: Accepted publickey for core from 10.0.0.1 port 46182 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:37.597678 sshd[4021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:37.603716 systemd-logind[1455]: New session 8 of user core. Jul 7 00:01:37.608595 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.520 [INFO][3990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.520 [INFO][3990] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" iface="eth0" netns="/var/run/netns/cni-32b445d7-e9aa-51b3-34bb-a02f352377c2" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.521 [INFO][3990] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" iface="eth0" netns="/var/run/netns/cni-32b445d7-e9aa-51b3-34bb-a02f352377c2" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.522 [INFO][3990] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" iface="eth0" netns="/var/run/netns/cni-32b445d7-e9aa-51b3-34bb-a02f352377c2" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.523 [INFO][3990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.523 [INFO][3990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.605 [INFO][4019] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.606 [INFO][4019] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.606 [INFO][4019] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.614 [WARNING][4019] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.614 [INFO][4019] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.619 [INFO][4019] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:37.626817 containerd[1465]: 2025-07-07 00:01:37.623 [INFO][3990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:37.627880 containerd[1465]: time="2025-07-07T00:01:37.627652730Z" level=info msg="TearDown network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" successfully" Jul 7 00:01:37.627880 containerd[1465]: time="2025-07-07T00:01:37.627696573Z" level=info msg="StopPodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" returns successfully" Jul 7 00:01:37.629149 containerd[1465]: time="2025-07-07T00:01:37.629124634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-mtmf7,Uid:05a5d423-9d7b-48ae-ba84-e25c4fe860af,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:01:37.631872 systemd[1]: run-netns-cni\x2d32b445d7\x2de9aa\x2d51b3\x2d34bb\x2da02f352377c2.mount: Deactivated successfully. Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.564 [INFO][4009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.564 [INFO][4009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" iface="eth0" netns="/var/run/netns/cni-a425f223-e7c1-62de-f603-29f462123f40" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.564 [INFO][4009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" iface="eth0" netns="/var/run/netns/cni-a425f223-e7c1-62de-f603-29f462123f40" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.566 [INFO][4009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" iface="eth0" netns="/var/run/netns/cni-a425f223-e7c1-62de-f603-29f462123f40" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.567 [INFO][4009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.567 [INFO][4009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.605 [INFO][4027] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.606 [INFO][4027] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.619 [INFO][4027] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.629 [WARNING][4027] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.629 [INFO][4027] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.632 [INFO][4027] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:37.638541 containerd[1465]: 2025-07-07 00:01:37.634 [INFO][4009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:37.639517 containerd[1465]: time="2025-07-07T00:01:37.639473782Z" level=info msg="TearDown network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" successfully" Jul 7 00:01:37.639517 containerd[1465]: time="2025-07-07T00:01:37.639507124Z" level=info msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" returns successfully" Jul 7 00:01:37.641562 systemd[1]: run-netns-cni\x2da425f223\x2de7c1\x2d62de\x2df603\x2d29f462123f40.mount: Deactivated successfully. 
Jul 7 00:01:37.730474 kubelet[2515]: I0707 00:01:37.729791 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-backend-key-pair\") pod \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" (UID: \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\") " Jul 7 00:01:37.730474 kubelet[2515]: I0707 00:01:37.729890 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-ca-bundle\") pod \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" (UID: \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\") " Jul 7 00:01:37.730474 kubelet[2515]: I0707 00:01:37.729910 2515 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjnnx\" (UniqueName: \"kubernetes.io/projected/8b639298-27a5-41f1-a363-3a02d9d5d0b3-kube-api-access-zjnnx\") pod \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\" (UID: \"8b639298-27a5-41f1-a363-3a02d9d5d0b3\") " Jul 7 00:01:37.731264 kubelet[2515]: I0707 00:01:37.730925 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8b639298-27a5-41f1-a363-3a02d9d5d0b3" (UID: "8b639298-27a5-41f1-a363-3a02d9d5d0b3"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 00:01:37.734630 kubelet[2515]: I0707 00:01:37.734446 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b639298-27a5-41f1-a363-3a02d9d5d0b3-kube-api-access-zjnnx" (OuterVolumeSpecName: "kube-api-access-zjnnx") pod "8b639298-27a5-41f1-a363-3a02d9d5d0b3" (UID: "8b639298-27a5-41f1-a363-3a02d9d5d0b3"). InnerVolumeSpecName "kube-api-access-zjnnx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 00:01:37.735082 kubelet[2515]: I0707 00:01:37.734991 2515 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8b639298-27a5-41f1-a363-3a02d9d5d0b3" (UID: "8b639298-27a5-41f1-a363-3a02d9d5d0b3"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 00:01:37.737573 systemd[1]: var-lib-kubelet-pods-8b639298\x2d27a5\x2d41f1\x2da363\x2d3a02d9d5d0b3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzjnnx.mount: Deactivated successfully. Jul 7 00:01:37.737800 systemd[1]: var-lib-kubelet-pods-8b639298\x2d27a5\x2d41f1\x2da363\x2d3a02d9d5d0b3-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
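The escaped mount-unit names here are mechanical, not corruption: systemd derives a unit name from a path by turning '/' into '-' and encoding every other unsafe byte as \xXX, so '-' becomes \x2d and the '~' in kubernetes.io~projected becomes \x7e. A rough re-implementation of that escaping (an approximation of systemd-escape --path, not the exact algorithm):

```go
package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: '/' separators become '-',
// and any byte outside [A-Za-z0-9:_.] is rendered as \xXX
// (so '-' -> \x2d and '~' -> \x7e).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(escapePath("/run/netns/cni-a425f223-e7c1-62de-f603-29f462123f40") + ".mount")
}
```

Run on the netns path above, it yields exactly the run-netns-cni\x2da425f223... unit name systemd just deactivated.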
Jul 7 00:01:37.831210 kubelet[2515]: I0707 00:01:37.831150 2515 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 00:01:37.831210 kubelet[2515]: I0707 00:01:37.831192 2515 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zjnnx\" (UniqueName: \"kubernetes.io/projected/8b639298-27a5-41f1-a363-3a02d9d5d0b3-kube-api-access-zjnnx\") on node \"localhost\" DevicePath \"\"" Jul 7 00:01:37.831210 kubelet[2515]: I0707 00:01:37.831202 2515 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b639298-27a5-41f1-a363-3a02d9d5d0b3-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 00:01:37.875699 sshd[4021]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:37.879883 systemd[1]: sshd@7-10.0.0.146:22-10.0.0.1:46182.service: Deactivated successfully. Jul 7 00:01:37.881977 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 00:01:37.882667 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit. Jul 7 00:01:37.883704 systemd-logind[1455]: Removed session 8. Jul 7 00:01:37.942572 systemd-networkd[1404]: cali4549d94e80c: Link UP Jul 7 00:01:37.942949 systemd-networkd[1404]: cali4549d94e80c: Gained carrier Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.685 [INFO][4041] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.698 [INFO][4041] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0 calico-apiserver-7b996bf4d6- calico-apiserver 05a5d423-9d7b-48ae-ba84-e25c4fe860af 975 0 2025-07-07 00:01:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b996bf4d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b996bf4d6-mtmf7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4549d94e80c [] [] }} ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.698 [INFO][4041] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.726 [INFO][4070] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" HandleID="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.726 [INFO][4070] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" 
HandleID="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00052fe30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b996bf4d6-mtmf7", "timestamp":"2025-07-07 00:01:37.725995451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.726 [INFO][4070] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.726 [INFO][4070] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.726 [INFO][4070] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.736 [INFO][4070] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.746 [INFO][4070] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.753 [INFO][4070] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.756 [INFO][4070] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.759 [INFO][4070] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.759 [INFO][4070] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.760 [INFO][4070] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064 Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.917 [INFO][4070] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.929 [INFO][4070] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.929 [INFO][4070] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" host="localhost" Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.929 [INFO][4070] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:01:37.960255 containerd[1465]: 2025-07-07 00:01:37.929 [INFO][4070] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" HandleID="k8s-pod-network.b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.961185 containerd[1465]: 2025-07-07 00:01:37.932 [INFO][4041] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a5d423-9d7b-48ae-ba84-e25c4fe860af", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b996bf4d6-mtmf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4549d94e80c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:37.961185 containerd[1465]: 2025-07-07 00:01:37.933 [INFO][4041] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.961185 containerd[1465]: 2025-07-07 00:01:37.933 [INFO][4041] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4549d94e80c ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.961185 containerd[1465]: 2025-07-07 00:01:37.943 [INFO][4041] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.961185 containerd[1465]: 2025-07-07 00:01:37.944 [INFO][4041] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a5d423-9d7b-48ae-ba84-e25c4fe860af", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064", Pod:"calico-apiserver-7b996bf4d6-mtmf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4549d94e80c", MAC:"3e:d0:6c:a9:dd:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:37.961185 containerd[1465]: 2025-07-07 00:01:37.955 [INFO][4041] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-mtmf7" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:37.990138 containerd[1465]: time="2025-07-07T00:01:37.989932544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:37.990138 containerd[1465]: time="2025-07-07T00:01:37.989991995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:37.990138 containerd[1465]: time="2025-07-07T00:01:37.990004088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:37.990323 containerd[1465]: time="2025-07-07T00:01:37.990090370Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:38.012551 systemd[1]: Started cri-containerd-b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064.scope - libcontainer container b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064. 
Jul 7 00:01:38.025207 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:38.048936 containerd[1465]: time="2025-07-07T00:01:38.048881879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-mtmf7,Uid:05a5d423-9d7b-48ae-ba84-e25c4fe860af,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064\"" Jul 7 00:01:38.051185 containerd[1465]: time="2025-07-07T00:01:38.050918424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:01:38.051672 containerd[1465]: time="2025-07-07T00:01:38.051625612Z" level=info msg="StopPodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\"" Jul 7 00:01:38.051941 containerd[1465]: time="2025-07-07T00:01:38.051657832Z" level=info msg="StopPodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\"" Jul 7 00:01:38.063727 systemd[1]: Removed slice kubepods-besteffort-pod8b639298_27a5_41f1_a363_3a02d9d5d0b3.slice - libcontainer container kubepods-besteffort-pod8b639298_27a5_41f1_a363_3a02d9d5d0b3.slice. Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.103 [INFO][4157] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.103 [INFO][4157] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" iface="eth0" netns="/var/run/netns/cni-49a78aa4-2954-4e60-ce96-36b00e9849fb" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.104 [INFO][4157] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" iface="eth0" netns="/var/run/netns/cni-49a78aa4-2954-4e60-ce96-36b00e9849fb" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.104 [INFO][4157] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" iface="eth0" netns="/var/run/netns/cni-49a78aa4-2954-4e60-ce96-36b00e9849fb" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.104 [INFO][4157] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.104 [INFO][4157] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.127 [INFO][4179] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.127 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.127 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.134 [WARNING][4179] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.135 [INFO][4179] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.137 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:38.143236 containerd[1465]: 2025-07-07 00:01:38.139 [INFO][4157] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:38.143758 containerd[1465]: time="2025-07-07T00:01:38.143454613Z" level=info msg="TearDown network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" successfully" Jul 7 00:01:38.143758 containerd[1465]: time="2025-07-07T00:01:38.143485260Z" level=info msg="StopPodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" returns successfully" Jul 7 00:01:38.144088 kubelet[2515]: E0707 00:01:38.144058 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:38.144715 containerd[1465]: time="2025-07-07T00:01:38.144690563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpvlw,Uid:ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece,Namespace:kube-system,Attempt:1,}" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.107 [INFO][4166] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.107 [INFO][4166] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" iface="eth0" netns="/var/run/netns/cni-199c0559-a546-73c6-7327-118231b03570" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.107 [INFO][4166] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" iface="eth0" netns="/var/run/netns/cni-199c0559-a546-73c6-7327-118231b03570" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.108 [INFO][4166] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" iface="eth0" netns="/var/run/netns/cni-199c0559-a546-73c6-7327-118231b03570" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.108 [INFO][4166] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.108 [INFO][4166] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.127 [INFO][4185] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.127 [INFO][4185] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.137 [INFO][4185] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.143 [WARNING][4185] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.143 [INFO][4185] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.145 [INFO][4185] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:38.152442 containerd[1465]: 2025-07-07 00:01:38.148 [INFO][4166] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:38.152785 containerd[1465]: time="2025-07-07T00:01:38.152582985Z" level=info msg="TearDown network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" successfully" Jul 7 00:01:38.152785 containerd[1465]: time="2025-07-07T00:01:38.152611559Z" level=info msg="StopPodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" returns successfully" Jul 7 00:01:38.153294 containerd[1465]: time="2025-07-07T00:01:38.153263463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-k76r4,Uid:6637e26b-2700-424e-b4eb-f1031d446d3b,Namespace:calico-apiserver,Attempt:1,}" Jul 7 00:01:38.257379 systemd-networkd[1404]: calid7e3012166e: Link UP Jul 7 00:01:38.258231 systemd-networkd[1404]: calid7e3012166e: Gained carrier Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.182 [INFO][4198] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.193 [INFO][4198] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0 coredns-674b8bbfcf- kube-system ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece 1007 0 2025-07-07 00:01:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-jpvlw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid7e3012166e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.193 [INFO][4198] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.219 [INFO][4226] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" HandleID="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.219 [INFO][4226] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" HandleID="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002defe0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-jpvlw", "timestamp":"2025-07-07 00:01:38.219269158 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.219 [INFO][4226] ipam/ipam_plugin.go 353: About 
to acquire host-wide IPAM lock. Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.219 [INFO][4226] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.219 [INFO][4226] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.226 [INFO][4226] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.232 [INFO][4226] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.236 [INFO][4226] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.237 [INFO][4226] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.241 [INFO][4226] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.241 [INFO][4226] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.243 [INFO][4226] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51 Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.247 [INFO][4226] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.252 [INFO][4226] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.252 [INFO][4226] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" host="localhost" Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.252 [INFO][4226] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
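Interleaved with the CNI activity, kubelet keeps repeating the "Nameserver limits exceeded" warning: the resolv.conf format honours at most three nameservers, so kubelet keeps the first three from the host (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A trivial model of that trim; the three-server cap is the real glibc limit, while the fourth server below is hypothetical (the log only shows the kept three):

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors MAXNS in glibc's resolv.h: resolvers read at
// most three "nameserver" lines, so kubelet truncates and warns.
const maxNameservers = 3

func applyLimit(servers []string) (kept, omitted []string) {
	if len(servers) <= maxNameservers {
		return servers, nil
	}
	return servers[:maxNameservers], servers[maxNameservers:]
}

func main() {
	kept, omitted := applyLimit([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Printf("the applied nameserver line is: %s (omitted: %s)\n",
		strings.Join(kept, " "), strings.Join(omitted, " "))
}
```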
Jul 7 00:01:38.271037 containerd[1465]: 2025-07-07 00:01:38.252 [INFO][4226] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" HandleID="k8s-pod-network.4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.271605 containerd[1465]: 2025-07-07 00:01:38.254 [INFO][4198] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-jpvlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7e3012166e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:38.271605 containerd[1465]: 2025-07-07 00:01:38.254 [INFO][4198] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.271605 containerd[1465]: 2025-07-07 00:01:38.254 [INFO][4198] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7e3012166e ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.271605 containerd[1465]: 2025-07-07 00:01:38.259 [INFO][4198] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.271605 containerd[1465]: 
2025-07-07 00:01:38.259 [INFO][4198] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51", Pod:"coredns-674b8bbfcf-jpvlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7e3012166e", MAC:"52:e7:fe:e6:dc:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:38.271605 containerd[1465]: 2025-07-07 00:01:38.268 [INFO][4198] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51" Namespace="kube-system" Pod="coredns-674b8bbfcf-jpvlw" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:38.290206 containerd[1465]: time="2025-07-07T00:01:38.290104624Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:38.290206 containerd[1465]: time="2025-07-07T00:01:38.290159076Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:38.290377 containerd[1465]: time="2025-07-07T00:01:38.290171148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:38.290377 containerd[1465]: time="2025-07-07T00:01:38.290252771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:38.316544 systemd[1]: Started cri-containerd-4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51.scope - libcontainer container 4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51. 
Jul 7 00:01:38.329699 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:38.356946 containerd[1465]: time="2025-07-07T00:01:38.356892898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jpvlw,Uid:ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece,Namespace:kube-system,Attempt:1,} returns sandbox id \"4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51\"" Jul 7 00:01:38.358141 kubelet[2515]: E0707 00:01:38.358118 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:38.364186 containerd[1465]: time="2025-07-07T00:01:38.364054948Z" level=info msg="CreateContainer within sandbox \"4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:01:38.367076 systemd-networkd[1404]: cali86f401837e5: Link UP Jul 7 00:01:38.368041 systemd-networkd[1404]: cali86f401837e5: Gained carrier Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.192 [INFO][4208] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.203 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0 calico-apiserver-7b996bf4d6- calico-apiserver 6637e26b-2700-424e-b4eb-f1031d446d3b 1008 0 2025-07-07 00:01:10 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b996bf4d6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7b996bf4d6-k76r4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali86f401837e5 [] [] }} ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.203 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.227 [INFO][4232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" HandleID="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.227 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" HandleID="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7b996bf4d6-k76r4", 
"timestamp":"2025-07-07 00:01:38.227748822 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.228 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.252 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.252 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.328 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.335 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.340 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.342 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.344 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.344 [INFO][4232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.346 [INFO][4232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.353 [INFO][4232] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.360 [INFO][4232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.360 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" host="localhost" Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.360 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 00:01:38.388865 containerd[1465]: 2025-07-07 00:01:38.360 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" HandleID="k8s-pod-network.ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.389446 containerd[1465]: 2025-07-07 00:01:38.364 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6637e26b-2700-424e-b4eb-f1031d446d3b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7b996bf4d6-k76r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86f401837e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:38.389446 containerd[1465]: 2025-07-07 00:01:38.364 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.389446 containerd[1465]: 2025-07-07 00:01:38.364 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86f401837e5 ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.389446 containerd[1465]: 2025-07-07 00:01:38.366 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.389446 containerd[1465]: 2025-07-07 00:01:38.369 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6637e26b-2700-424e-b4eb-f1031d446d3b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e", Pod:"calico-apiserver-7b996bf4d6-k76r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86f401837e5", MAC:"72:d0:9e:99:d8:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:38.389446 containerd[1465]: 2025-07-07 00:01:38.385 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e" Namespace="calico-apiserver" Pod="calico-apiserver-7b996bf4d6-k76r4" WorkloadEndpoint="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:38.401191 containerd[1465]: time="2025-07-07T00:01:38.401122850Z" level=info msg="CreateContainer within sandbox \"4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1d01598357ac835dc2789a7cbb1a688c681599e0bc2bc7e3d8fb0194f4c81f0f\"" Jul 7 00:01:38.404767 containerd[1465]: time="2025-07-07T00:01:38.401801254Z" level=info msg="StartContainer for \"1d01598357ac835dc2789a7cbb1a688c681599e0bc2bc7e3d8fb0194f4c81f0f\"" Jul 7 00:01:38.413027 containerd[1465]: time="2025-07-07T00:01:38.412786313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:38.413027 containerd[1465]: time="2025-07-07T00:01:38.412859200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:38.413027 containerd[1465]: time="2025-07-07T00:01:38.412872124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:38.413341 containerd[1465]: time="2025-07-07T00:01:38.413245655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:38.432460 systemd[1]: Started cri-containerd-1d01598357ac835dc2789a7cbb1a688c681599e0bc2bc7e3d8fb0194f4c81f0f.scope - libcontainer container 1d01598357ac835dc2789a7cbb1a688c681599e0bc2bc7e3d8fb0194f4c81f0f. Jul 7 00:01:38.435700 systemd[1]: Started cri-containerd-ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e.scope - libcontainer container ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e. Jul 7 00:01:38.448293 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:38.467010 containerd[1465]: time="2025-07-07T00:01:38.466957083Z" level=info msg="StartContainer for \"1d01598357ac835dc2789a7cbb1a688c681599e0bc2bc7e3d8fb0194f4c81f0f\" returns successfully" Jul 7 00:01:38.473297 containerd[1465]: time="2025-07-07T00:01:38.473259569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b996bf4d6-k76r4,Uid:6637e26b-2700-424e-b4eb-f1031d446d3b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e\"" Jul 7 00:01:38.503155 kubelet[2515]: E0707 00:01:38.503105 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:38.516732 kubelet[2515]: I0707 00:01:38.516600 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jpvlw" podStartSLOduration=37.516435571 podStartE2EDuration="37.516435571s" podCreationTimestamp="2025-07-07 00:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:38.516409993 +0000 UTC m=+42.582076218" watchObservedRunningTime="2025-07-07 00:01:38.516435571 +0000 UTC m=+42.582037035" Jul 7 00:01:38.580162 systemd[1]: Created slice kubepods-besteffort-poda8d1894c_26d6_47e0_8732_b17c5daaf233.slice - libcontainer container kubepods-besteffort-poda8d1894c_26d6_47e0_8732_b17c5daaf233.slice. Jul 7 00:01:38.636519 systemd[1]: run-netns-cni\x2d49a78aa4\x2d2954\x2d4e60\x2dce96\x2d36b00e9849fb.mount: Deactivated successfully. 
Jul 7 00:01:38.636696 kubelet[2515]: I0707 00:01:38.636629 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8d1894c-26d6-47e0-8732-b17c5daaf233-whisker-ca-bundle\") pod \"whisker-7755974d9f-dzzdk\" (UID: \"a8d1894c-26d6-47e0-8732-b17c5daaf233\") " pod="calico-system/whisker-7755974d9f-dzzdk" Jul 7 00:01:38.636696 kubelet[2515]: I0707 00:01:38.636680 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khgdg\" (UniqueName: \"kubernetes.io/projected/a8d1894c-26d6-47e0-8732-b17c5daaf233-kube-api-access-khgdg\") pod \"whisker-7755974d9f-dzzdk\" (UID: \"a8d1894c-26d6-47e0-8732-b17c5daaf233\") " pod="calico-system/whisker-7755974d9f-dzzdk" Jul 7 00:01:38.636823 kubelet[2515]: I0707 00:01:38.636707 2515 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a8d1894c-26d6-47e0-8732-b17c5daaf233-whisker-backend-key-pair\") pod \"whisker-7755974d9f-dzzdk\" (UID: \"a8d1894c-26d6-47e0-8732-b17c5daaf233\") " pod="calico-system/whisker-7755974d9f-dzzdk" Jul 7 00:01:38.636986 systemd[1]: run-netns-cni\x2d199c0559\x2da546\x2d73c6\x2d7327\x2d118231b03570.mount: Deactivated successfully. Jul 7 00:01:38.891558 containerd[1465]: time="2025-07-07T00:01:38.891493741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7755974d9f-dzzdk,Uid:a8d1894c-26d6-47e0-8732-b17c5daaf233,Namespace:calico-system,Attempt:0,}" Jul 7 00:01:38.953380 kernel: bpftool[4519]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 00:01:39.194885 systemd-networkd[1404]: vxlan.calico: Link UP Jul 7 00:01:39.194894 systemd-networkd[1404]: vxlan.calico: Gained carrier Jul 7 00:01:39.280522 systemd-networkd[1404]: cali4549d94e80c: Gained IPv6LL Jul 7 00:01:39.510591 kubelet[2515]: E0707 00:01:39.510215 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:39.720711 systemd-networkd[1404]: calic188f6e26bf: Link UP Jul 7 00:01:39.721526 systemd-networkd[1404]: calic188f6e26bf: Gained carrier Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.958 [INFO][4491] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7755974d9f--dzzdk-eth0 whisker-7755974d9f- calico-system a8d1894c-26d6-47e0-8732-b17c5daaf233 1040 0 2025-07-07 00:01:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7755974d9f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7755974d9f-dzzdk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic188f6e26bf [] [] }} ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.958 [INFO][4491] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 
00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.985 [INFO][4521] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" HandleID="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Workload="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.985 [INFO][4521] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" HandleID="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Workload="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7755974d9f-dzzdk", "timestamp":"2025-07-07 00:01:38.985493037 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.985 [INFO][4521] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.985 [INFO][4521] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.985 [INFO][4521] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.992 [INFO][4521] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:38.998 [INFO][4521] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.002 [INFO][4521] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.004 [INFO][4521] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.006 [INFO][4521] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.006 [INFO][4521] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.007 [INFO][4521] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2 Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.138 [INFO][4521] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.715 [INFO][4521] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 
00:01:39.715 [INFO][4521] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" host="localhost" Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.715 [INFO][4521] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:39.898199 containerd[1465]: 2025-07-07 00:01:39.715 [INFO][4521] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" HandleID="k8s-pod-network.f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Workload="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 00:01:39.900296 containerd[1465]: 2025-07-07 00:01:39.718 [INFO][4491] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7755974d9f--dzzdk-eth0", GenerateName:"whisker-7755974d9f-", Namespace:"calico-system", SelfLink:"", UID:"a8d1894c-26d6-47e0-8732-b17c5daaf233", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7755974d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7755974d9f-dzzdk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic188f6e26bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:39.900296 containerd[1465]: 2025-07-07 00:01:39.718 [INFO][4491] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 00:01:39.900296 containerd[1465]: 2025-07-07 00:01:39.718 [INFO][4491] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic188f6e26bf ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 00:01:39.900296 containerd[1465]: 2025-07-07 00:01:39.722 [INFO][4491] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 00:01:39.900296 containerd[1465]: 2025-07-07 00:01:39.723 
[INFO][4491] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7755974d9f--dzzdk-eth0", GenerateName:"whisker-7755974d9f-", Namespace:"calico-system", SelfLink:"", UID:"a8d1894c-26d6-47e0-8732-b17c5daaf233", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7755974d9f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2", Pod:"whisker-7755974d9f-dzzdk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic188f6e26bf", MAC:"fa:80:8a:0d:e0:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:39.900296 containerd[1465]: 2025-07-07 00:01:39.890 [INFO][4491] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2" Namespace="calico-system" Pod="whisker-7755974d9f-dzzdk" WorkloadEndpoint="localhost-k8s-whisker--7755974d9f--dzzdk-eth0" Jul 7 00:01:39.935425 containerd[1465]: time="2025-07-07T00:01:39.935161754Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:39.935425 containerd[1465]: time="2025-07-07T00:01:39.935219492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:39.935425 containerd[1465]: time="2025-07-07T00:01:39.935242635Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:39.936564 containerd[1465]: time="2025-07-07T00:01:39.936419224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:39.959818 systemd[1]: run-containerd-runc-k8s.io-f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2-runc.uXNvv9.mount: Deactivated successfully. Jul 7 00:01:39.972448 systemd[1]: Started cri-containerd-f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2.scope - libcontainer container f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2. 
Jul 7 00:01:39.986910 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:40.013151 containerd[1465]: time="2025-07-07T00:01:40.013106603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7755974d9f-dzzdk,Uid:a8d1894c-26d6-47e0-8732-b17c5daaf233,Namespace:calico-system,Attempt:0,} returns sandbox id \"f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2\"" Jul 7 00:01:40.054407 kubelet[2515]: I0707 00:01:40.054365 2515 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b639298-27a5-41f1-a363-3a02d9d5d0b3" path="/var/lib/kubelet/pods/8b639298-27a5-41f1-a363-3a02d9d5d0b3/volumes" Jul 7 00:01:40.305533 systemd-networkd[1404]: calid7e3012166e: Gained IPv6LL Jul 7 00:01:40.369848 systemd-networkd[1404]: cali86f401837e5: Gained IPv6LL Jul 7 00:01:40.513975 kubelet[2515]: E0707 00:01:40.513932 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:40.944550 systemd-networkd[1404]: vxlan.calico: Gained IPv6LL Jul 7 00:01:41.347534 containerd[1465]: time="2025-07-07T00:01:41.347475185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:41.348344 containerd[1465]: time="2025-07-07T00:01:41.348281148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 00:01:41.349887 containerd[1465]: time="2025-07-07T00:01:41.349853671Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:41.352797 containerd[1465]: time="2025-07-07T00:01:41.352756561Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:41.353464 containerd[1465]: time="2025-07-07T00:01:41.353437699Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 3.302459994s" Jul 7 00:01:41.353516 containerd[1465]: time="2025-07-07T00:01:41.353470751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:01:41.354873 containerd[1465]: time="2025-07-07T00:01:41.354832578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 00:01:41.362703 containerd[1465]: time="2025-07-07T00:01:41.362646228Z" level=info msg="CreateContainer within sandbox \"b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:01:41.381946 containerd[1465]: time="2025-07-07T00:01:41.381892762Z" level=info msg="CreateContainer within sandbox \"b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"ebf65f0682239b335900ec697036e7b3538464439a592d8d582ce95eb185e60e\"" Jul 7 00:01:41.382804 containerd[1465]: time="2025-07-07T00:01:41.382718282Z" level=info msg="StartContainer for \"ebf65f0682239b335900ec697036e7b3538464439a592d8d582ce95eb185e60e\"" Jul 7 00:01:41.392563 systemd-networkd[1404]: calic188f6e26bf: Gained IPv6LL Jul 7 00:01:41.419598 systemd[1]: Started cri-containerd-ebf65f0682239b335900ec697036e7b3538464439a592d8d582ce95eb185e60e.scope - libcontainer container ebf65f0682239b335900ec697036e7b3538464439a592d8d582ce95eb185e60e. Jul 7 00:01:41.654199 containerd[1465]: time="2025-07-07T00:01:41.653967433Z" level=info msg="StartContainer for \"ebf65f0682239b335900ec697036e7b3538464439a592d8d582ce95eb185e60e\" returns successfully" Jul 7 00:01:41.660046 kubelet[2515]: E0707 00:01:41.659922 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:41.719942 containerd[1465]: time="2025-07-07T00:01:41.719882663Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:41.720951 containerd[1465]: time="2025-07-07T00:01:41.720906556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 00:01:41.723632 containerd[1465]: time="2025-07-07T00:01:41.723583912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 368.702292ms" Jul 7 00:01:41.723632 containerd[1465]: time="2025-07-07T00:01:41.723624168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 00:01:41.725453 containerd[1465]: time="2025-07-07T00:01:41.725399171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 00:01:41.730830 containerd[1465]: time="2025-07-07T00:01:41.730667612Z" level=info msg="CreateContainer within sandbox \"ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 00:01:41.812395 containerd[1465]: time="2025-07-07T00:01:41.812340226Z" level=info msg="CreateContainer within sandbox \"ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"aa6731ae8baf20a31c0737d59dc4581b3bb84b81a828235ad52561c33def51f4\"" Jul 7 00:01:41.813118 containerd[1465]: time="2025-07-07T00:01:41.813077029Z" level=info msg="StartContainer for \"aa6731ae8baf20a31c0737d59dc4581b3bb84b81a828235ad52561c33def51f4\"" Jul 7 00:01:41.844471 systemd[1]: Started cri-containerd-aa6731ae8baf20a31c0737d59dc4581b3bb84b81a828235ad52561c33def51f4.scope - libcontainer container aa6731ae8baf20a31c0737d59dc4581b3bb84b81a828235ad52561c33def51f4. 
Jul 7 00:01:41.941736 containerd[1465]: time="2025-07-07T00:01:41.941547534Z" level=info msg="StartContainer for \"aa6731ae8baf20a31c0737d59dc4581b3bb84b81a828235ad52561c33def51f4\" returns successfully" Jul 7 00:01:42.673960 kubelet[2515]: I0707 00:01:42.673888 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b996bf4d6-mtmf7" podStartSLOduration=29.3697854 podStartE2EDuration="32.673868252s" podCreationTimestamp="2025-07-07 00:01:10 +0000 UTC" firstStartedPulling="2025-07-07 00:01:38.050296606 +0000 UTC m=+42.115898070" lastFinishedPulling="2025-07-07 00:01:41.354379458 +0000 UTC m=+45.419980922" observedRunningTime="2025-07-07 00:01:42.673035669 +0000 UTC m=+46.738637133" watchObservedRunningTime="2025-07-07 00:01:42.673868252 +0000 UTC m=+46.739469716" Jul 7 00:01:42.894726 systemd[1]: Started sshd@8-10.0.0.146:22-10.0.0.1:46272.service - OpenSSH per-connection server daemon (10.0.0.1:46272). Jul 7 00:01:42.944801 sshd[4761]: Accepted publickey for core from 10.0.0.1 port 46272 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:42.946711 sshd[4761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:42.951861 systemd-logind[1455]: New session 9 of user core. Jul 7 00:01:42.959566 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 00:01:43.101106 sshd[4761]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:43.105209 systemd[1]: sshd@8-10.0.0.146:22-10.0.0.1:46272.service: Deactivated successfully. Jul 7 00:01:43.107295 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 00:01:43.108112 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit. Jul 7 00:01:43.109121 systemd-logind[1455]: Removed session 9. 
Jul 7 00:01:43.665650 kubelet[2515]: I0707 00:01:43.665436 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:01:43.768660 kubelet[2515]: I0707 00:01:43.768054 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7b996bf4d6-k76r4" podStartSLOduration=30.517957888 podStartE2EDuration="33.768032313s" podCreationTimestamp="2025-07-07 00:01:10 +0000 UTC" firstStartedPulling="2025-07-07 00:01:38.474499156 +0000 UTC m=+42.540100620" lastFinishedPulling="2025-07-07 00:01:41.724573581 +0000 UTC m=+45.790175045" observedRunningTime="2025-07-07 00:01:42.686438069 +0000 UTC m=+46.752039533" watchObservedRunningTime="2025-07-07 00:01:43.768032313 +0000 UTC m=+47.833633777" Jul 7 00:01:43.965801 containerd[1465]: time="2025-07-07T00:01:43.965643455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:43.969213 containerd[1465]: time="2025-07-07T00:01:43.969161950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 00:01:43.971492 containerd[1465]: time="2025-07-07T00:01:43.971249108Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:43.975123 containerd[1465]: time="2025-07-07T00:01:43.975081051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:43.975763 containerd[1465]: time="2025-07-07T00:01:43.975711304Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.250264705s" Jul 7 00:01:43.975867 containerd[1465]: time="2025-07-07T00:01:43.975767660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 00:01:43.983213 containerd[1465]: time="2025-07-07T00:01:43.983146431Z" level=info msg="CreateContainer within sandbox \"f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 00:01:44.003547 containerd[1465]: time="2025-07-07T00:01:44.002618021Z" level=info msg="CreateContainer within sandbox \"f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"adad8a145694c8e3bc16482b0ece4e741768e62c9bb98143008e08625f9bdea9\"" Jul 7 00:01:44.003287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount476073088.mount: Deactivated successfully. Jul 7 00:01:44.003951 containerd[1465]: time="2025-07-07T00:01:44.003562444Z" level=info msg="StartContainer for \"adad8a145694c8e3bc16482b0ece4e741768e62c9bb98143008e08625f9bdea9\"" Jul 7 00:01:44.044589 systemd[1]: Started cri-containerd-adad8a145694c8e3bc16482b0ece4e741768e62c9bb98143008e08625f9bdea9.scope - libcontainer container adad8a145694c8e3bc16482b0ece4e741768e62c9bb98143008e08625f9bdea9. 
Jul 7 00:01:44.112886 containerd[1465]: time="2025-07-07T00:01:44.112819950Z" level=info msg="StartContainer for \"adad8a145694c8e3bc16482b0ece4e741768e62c9bb98143008e08625f9bdea9\" returns successfully" Jul 7 00:01:44.131708 containerd[1465]: time="2025-07-07T00:01:44.131612284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 00:01:46.027994 kubelet[2515]: I0707 00:01:46.027916 2515 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 00:01:47.196775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1193025734.mount: Deactivated successfully. Jul 7 00:01:47.322328 containerd[1465]: time="2025-07-07T00:01:47.322250004Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:47.323726 containerd[1465]: time="2025-07-07T00:01:47.323680549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 00:01:47.324917 containerd[1465]: time="2025-07-07T00:01:47.324881302Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:47.327217 containerd[1465]: time="2025-07-07T00:01:47.327149329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:47.327828 containerd[1465]: time="2025-07-07T00:01:47.327792375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.196112775s" Jul 7 00:01:47.327828 containerd[1465]: time="2025-07-07T00:01:47.327822933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 00:01:47.332351 containerd[1465]: time="2025-07-07T00:01:47.332295697Z" level=info msg="CreateContainer within sandbox \"f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 00:01:47.345458 containerd[1465]: time="2025-07-07T00:01:47.345422881Z" level=info msg="CreateContainer within sandbox \"f26ef6c41e2b00a40e9074643d873f4e3fc8687c7e96657b4080432af986aed2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6802fb0f3bbf85a20f8ea99556fe0bacea7a8c48d52c59ceecba39725bf0a455\"" Jul 7 00:01:47.345932 containerd[1465]: time="2025-07-07T00:01:47.345908994Z" level=info msg="StartContainer for \"6802fb0f3bbf85a20f8ea99556fe0bacea7a8c48d52c59ceecba39725bf0a455\"" Jul 7 00:01:47.379511 systemd[1]: Started cri-containerd-6802fb0f3bbf85a20f8ea99556fe0bacea7a8c48d52c59ceecba39725bf0a455.scope - libcontainer container 6802fb0f3bbf85a20f8ea99556fe0bacea7a8c48d52c59ceecba39725bf0a455. 
Jul 7 00:01:47.729635 containerd[1465]: time="2025-07-07T00:01:47.729576002Z" level=info msg="StartContainer for \"6802fb0f3bbf85a20f8ea99556fe0bacea7a8c48d52c59ceecba39725bf0a455\" returns successfully" Jul 7 00:01:47.752131 kubelet[2515]: I0707 00:01:47.751763 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7755974d9f-dzzdk" podStartSLOduration=2.437768566 podStartE2EDuration="9.751742131s" podCreationTimestamp="2025-07-07 00:01:38 +0000 UTC" firstStartedPulling="2025-07-07 00:01:40.014625445 +0000 UTC m=+44.080226909" lastFinishedPulling="2025-07-07 00:01:47.32859901 +0000 UTC m=+51.394200474" observedRunningTime="2025-07-07 00:01:47.75087841 +0000 UTC m=+51.816479874" watchObservedRunningTime="2025-07-07 00:01:47.751742131 +0000 UTC m=+51.817343595" Jul 7 00:01:48.052624 containerd[1465]: time="2025-07-07T00:01:48.052365540Z" level=info msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" Jul 7 00:01:48.116439 systemd[1]: Started sshd@9-10.0.0.146:22-10.0.0.1:47498.service - OpenSSH per-connection server daemon (10.0.0.1:47498). Jul 7 00:01:48.157353 sshd[4903]: Accepted publickey for core from 10.0.0.1 port 47498 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:48.186894 sshd[4903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:48.191597 systemd-logind[1455]: New session 10 of user core. Jul 7 00:01:48.196527 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 00:01:48.666589 sshd[4903]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:48.670705 systemd[1]: sshd@9-10.0.0.146:22-10.0.0.1:47498.service: Deactivated successfully. Jul 7 00:01:48.672721 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 00:01:48.673295 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Jul 7 00:01:48.674200 systemd-logind[1455]: Removed session 10. Jul 7 00:01:49.052733 containerd[1465]: time="2025-07-07T00:01:49.052624878Z" level=info msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" Jul 7 00:01:49.053963 containerd[1465]: time="2025-07-07T00:01:49.053748767Z" level=info msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.045 [INFO][4895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.046 [INFO][4895] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" iface="eth0" netns="/var/run/netns/cni-3ddec431-492e-9b59-05bb-c215f7155abd" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.046 [INFO][4895] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" iface="eth0" netns="/var/run/netns/cni-3ddec431-492e-9b59-05bb-c215f7155abd" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.046 [INFO][4895] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" iface="eth0" netns="/var/run/netns/cni-3ddec431-492e-9b59-05bb-c215f7155abd" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.046 [INFO][4895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.046 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.068 [INFO][4918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.068 [INFO][4918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.068 [INFO][4918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.456 [WARNING][4918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.456 [INFO][4918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.749 [INFO][4918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:49.758918 containerd[1465]: 2025-07-07 00:01:49.753 [INFO][4895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:49.759491 containerd[1465]: time="2025-07-07T00:01:49.759447774Z" level=info msg="TearDown network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" successfully" Jul 7 00:01:49.759491 containerd[1465]: time="2025-07-07T00:01:49.759488642Z" level=info msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" returns successfully" Jul 7 00:01:49.760752 containerd[1465]: time="2025-07-07T00:01:49.760668385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4s7sb,Uid:2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6,Namespace:calico-system,Attempt:1,}" Jul 7 00:01:49.763488 systemd[1]: run-netns-cni\x2d3ddec431\x2d492e\x2d9b59\x2d05bb\x2dc215f7155abd.mount: Deactivated successfully. Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.859 [INFO][4946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.859 [INFO][4946] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" iface="eth0" netns="/var/run/netns/cni-ef32f46c-835b-32cd-a897-a9c9ec1bd027" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.859 [INFO][4946] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" iface="eth0" netns="/var/run/netns/cni-ef32f46c-835b-32cd-a897-a9c9ec1bd027" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.861 [INFO][4946] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" iface="eth0" netns="/var/run/netns/cni-ef32f46c-835b-32cd-a897-a9c9ec1bd027" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.861 [INFO][4946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.861 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.883 [INFO][4969] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.883 [INFO][4969] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.883 [INFO][4969] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.888 [WARNING][4969] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.889 [INFO][4969] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.890 [INFO][4969] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:49.895027 containerd[1465]: 2025-07-07 00:01:49.892 [INFO][4946] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:49.895027 containerd[1465]: time="2025-07-07T00:01:49.894803323Z" level=info msg="TearDown network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" successfully" Jul 7 00:01:49.895027 containerd[1465]: time="2025-07-07T00:01:49.894831836Z" level=info msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" returns successfully" Jul 7 00:01:49.895803 containerd[1465]: time="2025-07-07T00:01:49.895780337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdfxn,Uid:70595641-eb8d-401b-b091-967934cddf8d,Namespace:calico-system,Attempt:1,}" Jul 7 00:01:49.898134 systemd[1]: run-netns-cni\x2def32f46c\x2d835b\x2d32cd\x2da897\x2da9c9ec1bd027.mount: Deactivated successfully. Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.859 [INFO][4945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.859 [INFO][4945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" iface="eth0" netns="/var/run/netns/cni-2f9ba1b1-d328-db9a-82e0-ce6cc2563ebb" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.860 [INFO][4945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" iface="eth0" netns="/var/run/netns/cni-2f9ba1b1-d328-db9a-82e0-ce6cc2563ebb" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.861 [INFO][4945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" iface="eth0" netns="/var/run/netns/cni-2f9ba1b1-d328-db9a-82e0-ce6cc2563ebb" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.861 [INFO][4945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.862 [INFO][4945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.890 [INFO][4971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.890 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.891 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.899 [WARNING][4971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.899 [INFO][4971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.901 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:49.906141 containerd[1465]: 2025-07-07 00:01:49.903 [INFO][4945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:49.907231 containerd[1465]: time="2025-07-07T00:01:49.906721417Z" level=info msg="TearDown network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" successfully" Jul 7 00:01:49.907231 containerd[1465]: time="2025-07-07T00:01:49.906754159Z" level=info msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" returns successfully" Jul 7 00:01:49.908352 containerd[1465]: time="2025-07-07T00:01:49.908329013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7b8cc4b5-cbtqq,Uid:00999ebd-f6ba-4e92-8b64-466bdf8e89f5,Namespace:calico-system,Attempt:1,}" Jul 7 00:01:49.909463 systemd[1]: run-netns-cni\x2d2f9ba1b1\x2dd328\x2ddb9a\x2d82e0\x2dce6cc2563ebb.mount: Deactivated successfully. Jul 7 00:01:50.632541 systemd-networkd[1404]: cali33cd740f1ca: Link UP Jul 7 00:01:50.632793 systemd-networkd[1404]: cali33cd740f1ca: Gained carrier Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.320 [INFO][4987] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4s7sb-eth0 csi-node-driver- calico-system 2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6 1152 0 2025-07-07 00:01:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4s7sb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali33cd740f1ca [] [] }} ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.321 [INFO][4987] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.353 [INFO][5032] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" HandleID="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" 
Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5032] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" HandleID="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000131630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4s7sb", "timestamp":"2025-07-07 00:01:50.353773756 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5032] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.361 [INFO][5032] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.369 [INFO][5032] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.373 [INFO][5032] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.375 [INFO][5032] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.377 [INFO][5032] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.377 [INFO][5032] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.379 [INFO][5032] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084 Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.432 [INFO][5032] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.628 [INFO][5032] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.628 [INFO][5032] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" host="localhost" Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.628 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide 
IPAM lock. Jul 7 00:01:50.694617 containerd[1465]: 2025-07-07 00:01:50.628 [INFO][5032] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" HandleID="k8s-pod-network.9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.695766 containerd[1465]: 2025-07-07 00:01:50.630 [INFO][4987] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4s7sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4s7sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33cd740f1ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.695766 containerd[1465]: 2025-07-07 00:01:50.630 [INFO][4987] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.695766 containerd[1465]: 2025-07-07 00:01:50.630 [INFO][4987] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33cd740f1ca ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.695766 containerd[1465]: 2025-07-07 00:01:50.632 [INFO][4987] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.695766 containerd[1465]: 2025-07-07 00:01:50.633 [INFO][4987] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4s7sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6", ResourceVersion:"1152", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084", Pod:"csi-node-driver-4s7sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33cd740f1ca", MAC:"4a:1b:84:b2:c7:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.695766 containerd[1465]: 2025-07-07 00:01:50.689 [INFO][4987] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084" Namespace="calico-system" Pod="csi-node-driver-4s7sb" WorkloadEndpoint="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:50.740221 containerd[1465]: time="2025-07-07T00:01:50.740100423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:50.740221 containerd[1465]: time="2025-07-07T00:01:50.740171356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:50.740422 containerd[1465]: time="2025-07-07T00:01:50.740201814Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:50.741022 containerd[1465]: time="2025-07-07T00:01:50.740969394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:50.757288 systemd-networkd[1404]: calib40b0e1f6d1: Link UP Jul 7 00:01:50.758096 systemd-networkd[1404]: calib40b0e1f6d1: Gained carrier Jul 7 00:01:50.762760 systemd[1]: Started cri-containerd-9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084.scope - libcontainer container 9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084. 
Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.322 [INFO][4998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0 goldmane-768f4c5c69- calico-system 70595641-eb8d-401b-b091-967934cddf8d 1159 0 2025-07-07 00:01:12 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-cdfxn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib40b0e1f6d1 [] [] }} ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.323 [INFO][4998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5034] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" HandleID="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5034] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" HandleID="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-cdfxn", "timestamp":"2025-07-07 00:01:50.354065633 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.354 [INFO][5034] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.628 [INFO][5034] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.628 [INFO][5034] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.638 [INFO][5034] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.718 [INFO][5034] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.723 [INFO][5034] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.725 [INFO][5034] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.730 [INFO][5034] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.730 [INFO][5034] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.732 [INFO][5034] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.739 [INFO][5034] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.746 [INFO][5034] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.746 [INFO][5034] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" host="localhost" Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.746 [INFO][5034] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
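Note: IPAM requests are serialized per node by that host-wide lock. The goldmane request ([5034]) logged "About to acquire host-wide IPAM lock" at 00:01:50.354 but "Acquired" only at 00:01:50.628, because the csi-node-driver assignment ([5032]) held the lock over exactly that interval. The wait, computed from the timestamps as logged:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps exactly as they appear in the [5034] entries above.
	const layout = "2006-01-02 15:04:05.000"
	aboutToAcquire, _ := time.Parse(layout, "2025-07-07 00:01:50.354")
	acquired, _ := time.Parse(layout, "2025-07-07 00:01:50.628")
	fmt.Println(acquired.Sub(aboutToAcquire)) // 274ms queued on the lock
}
```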
Jul 7 00:01:50.774642 containerd[1465]: 2025-07-07 00:01:50.746 [INFO][5034] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" HandleID="k8s-pod-network.fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.776264 containerd[1465]: 2025-07-07 00:01:50.750 [INFO][4998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"70595641-eb8d-401b-b091-967934cddf8d", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-cdfxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib40b0e1f6d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.776264 containerd[1465]: 2025-07-07 00:01:50.751 [INFO][4998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.776264 containerd[1465]: 2025-07-07 00:01:50.751 [INFO][4998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib40b0e1f6d1 ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.776264 containerd[1465]: 2025-07-07 00:01:50.758 [INFO][4998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.776264 containerd[1465]: 2025-07-07 00:01:50.761 [INFO][4998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"70595641-eb8d-401b-b091-967934cddf8d", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f", Pod:"goldmane-768f4c5c69-cdfxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib40b0e1f6d1", MAC:"a6:56:ab:2b:f5:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.776264 containerd[1465]: 2025-07-07 00:01:50.771 [INFO][4998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f" Namespace="calico-system" Pod="goldmane-768f4c5c69-cdfxn" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:50.783612 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:50.798964 containerd[1465]: time="2025-07-07T00:01:50.798778545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:50.798964 containerd[1465]: time="2025-07-07T00:01:50.798830993Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:50.798964 containerd[1465]: time="2025-07-07T00:01:50.798865668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:50.799233 containerd[1465]: time="2025-07-07T00:01:50.799006873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:50.807503 containerd[1465]: time="2025-07-07T00:01:50.807382549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4s7sb,Uid:2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6,Namespace:calico-system,Attempt:1,} returns sandbox id \"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084\"" Jul 7 00:01:50.809727 containerd[1465]: time="2025-07-07T00:01:50.809685390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 00:01:50.828575 systemd[1]: Started cri-containerd-fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f.scope - libcontainer container fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f. 
Jul 7 00:01:50.843902 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:50.844580 systemd-networkd[1404]: calif5b99ac512f: Link UP Jul 7 00:01:50.845226 systemd-networkd[1404]: calif5b99ac512f: Gained carrier Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.335 [INFO][5005] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0 calico-kube-controllers-5c7b8cc4b5- calico-system 00999ebd-f6ba-4e92-8b64-466bdf8e89f5 1158 0 2025-07-07 00:01:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c7b8cc4b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5c7b8cc4b5-cbtqq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif5b99ac512f [] [] }} ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.335 [INFO][5005] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.372 [INFO][5046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" HandleID="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.372 [INFO][5046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" HandleID="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001393a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5c7b8cc4b5-cbtqq", "timestamp":"2025-07-07 00:01:50.372701842 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.372 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.746 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.746 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.756 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.817 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.823 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.825 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.826 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.827 [INFO][5046] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.828 [INFO][5046] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923 Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.831 [INFO][5046] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.838 [INFO][5046] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.838 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" host="localhost" Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.838 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
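Note: the three sandboxes restarted in this burst drew consecutive addresses (.133, .134, .135) from the node's single affine block, and coredns draws .136 further below. The block geometry implied by the /26 in the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	p := netip.MustParsePrefix("192.168.88.128/26")
	size := 1 << (32 - p.Bits()) // 2^6 = 64 addresses in a /26
	last := p.Addr()
	for i := 0; i < size-1; i++ {
		last = last.Next()
	}
	fmt.Printf("block %s spans %s-%s (%d addresses)\n", p, p.Addr(), last, size)
	// block 192.168.88.128/26 spans 192.168.88.128-192.168.88.191 (64 addresses)
}
```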
Jul 7 00:01:50.862711 containerd[1465]: 2025-07-07 00:01:50.838 [INFO][5046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" HandleID="k8s-pod-network.d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.863849 containerd[1465]: 2025-07-07 00:01:50.841 [INFO][5005] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0", GenerateName:"calico-kube-controllers-5c7b8cc4b5-", Namespace:"calico-system", SelfLink:"", UID:"00999ebd-f6ba-4e92-8b64-466bdf8e89f5", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7b8cc4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5c7b8cc4b5-cbtqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b99ac512f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.863849 containerd[1465]: 2025-07-07 00:01:50.841 [INFO][5005] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.863849 containerd[1465]: 2025-07-07 00:01:50.841 [INFO][5005] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5b99ac512f ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.863849 containerd[1465]: 2025-07-07 00:01:50.845 [INFO][5005] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.863849 containerd[1465]: 2025-07-07 00:01:50.848 [INFO][5005] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0", GenerateName:"calico-kube-controllers-5c7b8cc4b5-", Namespace:"calico-system", SelfLink:"", UID:"00999ebd-f6ba-4e92-8b64-466bdf8e89f5", ResourceVersion:"1158", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7b8cc4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923", Pod:"calico-kube-controllers-5c7b8cc4b5-cbtqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b99ac512f", MAC:"16:d2:2b:a6:be:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:50.863849 containerd[1465]: 2025-07-07 00:01:50.858 [INFO][5005] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923" Namespace="calico-system" Pod="calico-kube-controllers-5c7b8cc4b5-cbtqq" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:50.883007 containerd[1465]: time="2025-07-07T00:01:50.881776869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-cdfxn,Uid:70595641-eb8d-401b-b091-967934cddf8d,Namespace:calico-system,Attempt:1,} returns sandbox id \"fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f\"" Jul 7 00:01:50.890797 containerd[1465]: time="2025-07-07T00:01:50.890691055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:50.891021 containerd[1465]: time="2025-07-07T00:01:50.890762119Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:50.891021 containerd[1465]: time="2025-07-07T00:01:50.890780984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:50.891021 containerd[1465]: time="2025-07-07T00:01:50.890877485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:50.918511 systemd[1]: Started cri-containerd-d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923.scope - libcontainer container d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923. Jul 7 00:01:50.930723 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:50.952842 containerd[1465]: time="2025-07-07T00:01:50.952796638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c7b8cc4b5-cbtqq,Uid:00999ebd-f6ba-4e92-8b64-466bdf8e89f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923\"" Jul 7 00:01:51.052381 containerd[1465]: time="2025-07-07T00:01:51.052334783Z" level=info msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.093 [INFO][5225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.093 [INFO][5225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" iface="eth0" netns="/var/run/netns/cni-db40610e-bf15-e2b4-0249-1ecdb9ffdda4" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.094 [INFO][5225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" iface="eth0" netns="/var/run/netns/cni-db40610e-bf15-e2b4-0249-1ecdb9ffdda4" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.094 [INFO][5225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" iface="eth0" netns="/var/run/netns/cni-db40610e-bf15-e2b4-0249-1ecdb9ffdda4" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.094 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.094 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.118 [INFO][5234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.118 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.119 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.125 [WARNING][5234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.125 [INFO][5234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.126 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:51.132608 containerd[1465]: 2025-07-07 00:01:51.129 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:51.133273 containerd[1465]: time="2025-07-07T00:01:51.133144905Z" level=info msg="TearDown network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" successfully" Jul 7 00:01:51.133273 containerd[1465]: time="2025-07-07T00:01:51.133188366Z" level=info msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" returns successfully" Jul 7 00:01:51.133630 kubelet[2515]: E0707 00:01:51.133604 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:51.134354 containerd[1465]: time="2025-07-07T00:01:51.134327002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvkj4,Uid:24b17fed-8f09-46ce-964b-ca73d8ad630a,Namespace:kube-system,Attempt:1,}" Jul 7 00:01:51.260098 systemd-networkd[1404]: cali404e8f6bc76: Link UP Jul 7 00:01:51.260450 systemd-networkd[1404]: cali404e8f6bc76: Gained carrier Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.181 [INFO][5242] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0 coredns-674b8bbfcf- kube-system 24b17fed-8f09-46ce-964b-ca73d8ad630a 1178 0 2025-07-07 00:01:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-dvkj4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali404e8f6bc76 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.181 [INFO][5242] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.206 [INFO][5257] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" HandleID="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" 
Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.207 [INFO][5257] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" HandleID="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7230), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-dvkj4", "timestamp":"2025-07-07 00:01:51.206646264 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.207 [INFO][5257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.207 [INFO][5257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.207 [INFO][5257] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.215 [INFO][5257] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.220 [INFO][5257] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.224 [INFO][5257] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.226 [INFO][5257] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.229 [INFO][5257] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.229 [INFO][5257] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.230 [INFO][5257] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130 Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.234 [INFO][5257] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.244 [INFO][5257] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.244 [INFO][5257] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" host="localhost" Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.244 [INFO][5257] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Jul 7 00:01:51.284988 containerd[1465]: 2025-07-07 00:01:51.244 [INFO][5257] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" HandleID="k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.285573 containerd[1465]: 2025-07-07 00:01:51.254 [INFO][5242] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24b17fed-8f09-46ce-964b-ca73d8ad630a", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-dvkj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404e8f6bc76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:51.285573 containerd[1465]: 2025-07-07 00:01:51.254 [INFO][5242] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.285573 containerd[1465]: 2025-07-07 00:01:51.254 [INFO][5242] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali404e8f6bc76 ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.285573 containerd[1465]: 2025-07-07 00:01:51.258 [INFO][5242] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 
00:01:51.285573 containerd[1465]: 2025-07-07 00:01:51.259 [INFO][5242] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24b17fed-8f09-46ce-964b-ca73d8ad630a", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130", Pod:"coredns-674b8bbfcf-dvkj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404e8f6bc76", MAC:"5e:09:9e:c3:b5:50", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:51.285573 containerd[1465]: 2025-07-07 00:01:51.274 [INFO][5242] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130" Namespace="kube-system" Pod="coredns-674b8bbfcf-dvkj4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:51.311110 containerd[1465]: time="2025-07-07T00:01:51.310462881Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 00:01:51.311110 containerd[1465]: time="2025-07-07T00:01:51.311095619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 00:01:51.311110 containerd[1465]: time="2025-07-07T00:01:51.311113112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:51.311319 containerd[1465]: time="2025-07-07T00:01:51.311232926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 00:01:51.337464 systemd[1]: Started cri-containerd-3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130.scope - libcontainer container 3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130. Jul 7 00:01:51.351197 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 00:01:51.375413 containerd[1465]: time="2025-07-07T00:01:51.375364324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dvkj4,Uid:24b17fed-8f09-46ce-964b-ca73d8ad630a,Namespace:kube-system,Attempt:1,} returns sandbox id \"3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130\"" Jul 7 00:01:51.376152 kubelet[2515]: E0707 00:01:51.376120 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:51.409518 containerd[1465]: time="2025-07-07T00:01:51.409264360Z" level=info msg="CreateContainer within sandbox \"3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 00:01:51.431813 containerd[1465]: time="2025-07-07T00:01:51.431749779Z" level=info msg="CreateContainer within sandbox \"3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5400391e4f9cd97a174075243344be3edfcb57577983d5bf48a558699a058eba\"" Jul 7 00:01:51.432227 containerd[1465]: time="2025-07-07T00:01:51.432175879Z" level=info msg="StartContainer for \"5400391e4f9cd97a174075243344be3edfcb57577983d5bf48a558699a058eba\"" Jul 7 00:01:51.459447 systemd[1]: Started cri-containerd-5400391e4f9cd97a174075243344be3edfcb57577983d5bf48a558699a058eba.scope - libcontainer container 5400391e4f9cd97a174075243344be3edfcb57577983d5bf48a558699a058eba. Jul 7 00:01:51.485194 containerd[1465]: time="2025-07-07T00:01:51.485135435Z" level=info msg="StartContainer for \"5400391e4f9cd97a174075243344be3edfcb57577983d5bf48a558699a058eba\" returns successfully" Jul 7 00:01:51.745572 kubelet[2515]: E0707 00:01:51.745211 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:51.770531 systemd[1]: run-netns-cni\x2ddb40610e\x2dbf15\x2de2b4\x2d0249\x2d1ecdb9ffdda4.mount: Deactivated successfully. 
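[Editor's note] The ipam/ipam.go lines above trace Calico's block-affinity assignment for the coredns pod end to end: acquire the host-wide IPAM lock, look up the host's existing affinities, try the affined block 192.168.88.128/26, load it, claim one free address (192.168.88.136) under a handle named after the sandbox ID, write the block back, and release the lock. A minimal sketch of that sequence follows; the types and the in-memory lock are hypothetical stand-ins, not Calico's actual ipam.go implementation (which locks and persists via its datastore):

```go
package main

import (
	"fmt"
	"sync"
)

// ipamBlock is a hypothetical stand-in for a Calico /26 allocation block.
type ipamBlock struct {
	cidr      string
	allocated map[string]string // IP -> handle ID
	free      []string
}

var hostLock sync.Mutex // stands in for the host-wide IPAM lock in the log

// autoAssign mirrors the logged sequence: lock, claim one address from the
// host's affined block under a handle, write the block back, unlock.
func autoAssign(block *ipamBlock, handleID string) (string, error) {
	hostLock.Lock()         // "Acquired host-wide IPAM lock."
	defer hostLock.Unlock() // "Released host-wide IPAM lock."

	if len(block.free) == 0 {
		return "", fmt.Errorf("block %s exhausted", block.cidr)
	}
	ip := block.free[0] // e.g. 192.168.88.136
	block.free = block.free[1:]
	block.allocated[ip] = handleID // "Writing block in order to claim IPs"
	return ip, nil
}

func main() {
	block := &ipamBlock{
		cidr:      "192.168.88.128/26",
		allocated: map[string]string{},
		free:      []string{"192.168.88.136"},
	}
	ip, err := autoAssign(block, "k8s-pod-network.3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130")
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned", ip) // assigned 192.168.88.136
}
```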
Jul 7 00:01:51.952566 systemd-networkd[1404]: cali33cd740f1ca: Gained IPv6LL Jul 7 00:01:52.056594 kubelet[2515]: I0707 00:01:52.055972 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dvkj4" podStartSLOduration=51.05595204 podStartE2EDuration="51.05595204s" podCreationTimestamp="2025-07-07 00:01:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 00:01:51.84342259 +0000 UTC m=+55.909024055" watchObservedRunningTime="2025-07-07 00:01:52.05595204 +0000 UTC m=+56.121553524" Jul 7 00:01:52.144509 systemd-networkd[1404]: calib40b0e1f6d1: Gained IPv6LL Jul 7 00:01:52.400498 systemd-networkd[1404]: calif5b99ac512f: Gained IPv6LL Jul 7 00:01:52.752137 kubelet[2515]: E0707 00:01:52.752003 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:52.810182 containerd[1465]: time="2025-07-07T00:01:52.810095829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:52.811017 containerd[1465]: time="2025-07-07T00:01:52.810947547Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 00:01:52.812424 containerd[1465]: time="2025-07-07T00:01:52.812354237Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:52.815248 containerd[1465]: time="2025-07-07T00:01:52.815172925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:52.815776 containerd[1465]: time="2025-07-07T00:01:52.815736783Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.006012571s" Jul 7 00:01:52.815776 containerd[1465]: time="2025-07-07T00:01:52.815771709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 00:01:52.816941 containerd[1465]: time="2025-07-07T00:01:52.816909053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 00:01:52.821511 containerd[1465]: time="2025-07-07T00:01:52.821474659Z" level=info msg="CreateContainer within sandbox \"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 00:01:52.841637 containerd[1465]: time="2025-07-07T00:01:52.841576594Z" level=info msg="CreateContainer within sandbox \"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0c28175a4d3210ec163b76cdc4db432aa7d8860c7f9934091ce3f1a6083dde42\"" Jul 7 00:01:52.842388 containerd[1465]: time="2025-07-07T00:01:52.842358310Z" level=info msg="StartContainer for 
\"0c28175a4d3210ec163b76cdc4db432aa7d8860c7f9934091ce3f1a6083dde42\"" Jul 7 00:01:52.873469 systemd[1]: Started cri-containerd-0c28175a4d3210ec163b76cdc4db432aa7d8860c7f9934091ce3f1a6083dde42.scope - libcontainer container 0c28175a4d3210ec163b76cdc4db432aa7d8860c7f9934091ce3f1a6083dde42. Jul 7 00:01:52.906127 containerd[1465]: time="2025-07-07T00:01:52.906047614Z" level=info msg="StartContainer for \"0c28175a4d3210ec163b76cdc4db432aa7d8860c7f9934091ce3f1a6083dde42\" returns successfully" Jul 7 00:01:52.976530 systemd-networkd[1404]: cali404e8f6bc76: Gained IPv6LL Jul 7 00:01:53.680437 systemd[1]: Started sshd@10-10.0.0.146:22-10.0.0.1:47508.service - OpenSSH per-connection server daemon (10.0.0.1:47508). Jul 7 00:01:53.729676 sshd[5398]: Accepted publickey for core from 10.0.0.1 port 47508 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:53.731458 sshd[5398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:53.736016 systemd-logind[1455]: New session 11 of user core. Jul 7 00:01:53.742430 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 00:01:53.755923 kubelet[2515]: E0707 00:01:53.755897 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:01:53.868088 sshd[5398]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:53.872142 systemd[1]: sshd@10-10.0.0.146:22-10.0.0.1:47508.service: Deactivated successfully. Jul 7 00:01:53.874200 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 00:01:53.874813 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. Jul 7 00:01:53.875637 systemd-logind[1455]: Removed session 11. Jul 7 00:01:56.035481 containerd[1465]: time="2025-07-07T00:01:56.035432038Z" level=info msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.071 [WARNING][5422] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24b17fed-8f09-46ce-964b-ca73d8ad630a", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130", Pod:"coredns-674b8bbfcf-dvkj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404e8f6bc76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.071 [INFO][5422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.071 [INFO][5422] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" iface="eth0" netns="" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.071 [INFO][5422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.071 [INFO][5422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.105 [INFO][5433] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.106 [INFO][5433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.106 [INFO][5433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.112 [WARNING][5433] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.112 [INFO][5433] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.114 [INFO][5433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:56.121354 containerd[1465]: 2025-07-07 00:01:56.117 [INFO][5422] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.122102 containerd[1465]: time="2025-07-07T00:01:56.121404937Z" level=info msg="TearDown network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" successfully" Jul 7 00:01:56.122102 containerd[1465]: time="2025-07-07T00:01:56.121443669Z" level=info msg="StopPodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" returns successfully" Jul 7 00:01:56.122188 containerd[1465]: time="2025-07-07T00:01:56.122164452Z" level=info msg="RemovePodSandbox for \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" Jul 7 00:01:56.124485 containerd[1465]: time="2025-07-07T00:01:56.124458946Z" level=info msg="Forcibly stopping sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\"" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.157 [WARNING][5455] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"24b17fed-8f09-46ce-964b-ca73d8ad630a", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3e7ed5baceabee4700248acc4b3cb0a212f49ee1ee83fd1fd8e2a4ed95511130", Pod:"coredns-674b8bbfcf-dvkj4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali404e8f6bc76", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.158 [INFO][5455] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.158 [INFO][5455] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" iface="eth0" netns="" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.158 [INFO][5455] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.158 [INFO][5455] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.180 [INFO][5464] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.181 [INFO][5464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.181 [INFO][5464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.186 [WARNING][5464] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.186 [INFO][5464] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" HandleID="k8s-pod-network.983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Workload="localhost-k8s-coredns--674b8bbfcf--dvkj4-eth0" Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.188 [INFO][5464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:56.193848 containerd[1465]: 2025-07-07 00:01:56.191 [INFO][5455] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf" Jul 7 00:01:56.194394 containerd[1465]: time="2025-07-07T00:01:56.193895143Z" level=info msg="TearDown network for sandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" successfully" Jul 7 00:01:56.199750 containerd[1465]: time="2025-07-07T00:01:56.199724889Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:56.199805 containerd[1465]: time="2025-07-07T00:01:56.199781225Z" level=info msg="RemovePodSandbox \"983a87b3f62dc4c56af4eb8348966eacc568e3feef788e4ab5f0cb1692167daf\" returns successfully" Jul 7 00:01:56.200452 containerd[1465]: time="2025-07-07T00:01:56.200362455Z" level=info msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.235 [WARNING][5481] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" WorkloadEndpoint="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.235 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.235 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" iface="eth0" netns="" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.235 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.235 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.261 [INFO][5490] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.261 [INFO][5490] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.261 [INFO][5490] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.268 [WARNING][5490] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.268 [INFO][5490] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.270 [INFO][5490] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:56.277271 containerd[1465]: 2025-07-07 00:01:56.273 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.277271 containerd[1465]: time="2025-07-07T00:01:56.277006213Z" level=info msg="TearDown network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" successfully" Jul 7 00:01:56.277271 containerd[1465]: time="2025-07-07T00:01:56.277046238Z" level=info msg="StopPodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" returns successfully" Jul 7 00:01:56.277747 containerd[1465]: time="2025-07-07T00:01:56.277604716Z" level=info msg="RemovePodSandbox for \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" Jul 7 00:01:56.277747 containerd[1465]: time="2025-07-07T00:01:56.277629833Z" level=info msg="Forcibly stopping sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\"" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.320 [WARNING][5509] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" WorkloadEndpoint="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.320 [INFO][5509] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.320 [INFO][5509] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" iface="eth0" netns="" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.320 [INFO][5509] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.320 [INFO][5509] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.349 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.349 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.349 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.404 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.404 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" HandleID="k8s-pod-network.2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Workload="localhost-k8s-whisker--7d4f4f56d5--58mzt-eth0" Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.406 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:56.412357 containerd[1465]: 2025-07-07 00:01:56.409 [INFO][5509] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0" Jul 7 00:01:56.412357 containerd[1465]: time="2025-07-07T00:01:56.412348214Z" level=info msg="TearDown network for sandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" successfully" Jul 7 00:01:56.491917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006924520.mount: Deactivated successfully. Jul 7 00:01:57.307681 containerd[1465]: time="2025-07-07T00:01:57.307600820Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:57.307681 containerd[1465]: time="2025-07-07T00:01:57.307698885Z" level=info msg="RemovePodSandbox \"2d1469d3f36295c2fea41af305b24a36fa71fab8d4e2ce5d6dde3c0cba7574b0\" returns successfully" Jul 7 00:01:57.308382 containerd[1465]: time="2025-07-07T00:01:57.308332273Z" level=info msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.595 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"70595641-eb8d-401b-b091-967934cddf8d", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f", Pod:"goldmane-768f4c5c69-cdfxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib40b0e1f6d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.595 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.595 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" iface="eth0" netns="" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.595 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.595 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.617 [INFO][5546] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.617 [INFO][5546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.617 [INFO][5546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.622 [WARNING][5546] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.622 [INFO][5546] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.623 [INFO][5546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:57.629108 containerd[1465]: 2025-07-07 00:01:57.626 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.629735 containerd[1465]: time="2025-07-07T00:01:57.629146517Z" level=info msg="TearDown network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" successfully" Jul 7 00:01:57.629735 containerd[1465]: time="2025-07-07T00:01:57.629175421Z" level=info msg="StopPodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" returns successfully" Jul 7 00:01:57.629782 containerd[1465]: time="2025-07-07T00:01:57.629730847Z" level=info msg="RemovePodSandbox for \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" Jul 7 00:01:57.629782 containerd[1465]: time="2025-07-07T00:01:57.629760643Z" level=info msg="Forcibly stopping sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\"" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.664 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"70595641-eb8d-401b-b091-967934cddf8d", ResourceVersion:"1170", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f", Pod:"goldmane-768f4c5c69-cdfxn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib40b0e1f6d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.665 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.665 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" iface="eth0" netns="" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.665 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.665 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.685 [INFO][5572] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.685 [INFO][5572] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.685 [INFO][5572] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.690 [WARNING][5572] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.690 [INFO][5572] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" HandleID="k8s-pod-network.8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Workload="localhost-k8s-goldmane--768f4c5c69--cdfxn-eth0" Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.691 [INFO][5572] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:57.697179 containerd[1465]: 2025-07-07 00:01:57.694 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42" Jul 7 00:01:57.699293 containerd[1465]: time="2025-07-07T00:01:57.697226880Z" level=info msg="TearDown network for sandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" successfully" Jul 7 00:01:58.330911 containerd[1465]: time="2025-07-07T00:01:58.330840874Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:58.331532 containerd[1465]: time="2025-07-07T00:01:58.330953121Z" level=info msg="RemovePodSandbox \"8d4068c362a843dc3875c806d250d009a82c46e665ffc35444ec26891c2ebc42\" returns successfully" Jul 7 00:01:58.331569 containerd[1465]: time="2025-07-07T00:01:58.331530380Z" level=info msg="StopPodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\"" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.379 [WARNING][5590] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a5d423-9d7b-48ae-ba84-e25c4fe860af", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064", Pod:"calico-apiserver-7b996bf4d6-mtmf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4549d94e80c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.379 [INFO][5590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.379 [INFO][5590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" iface="eth0" netns="" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.379 [INFO][5590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.379 [INFO][5590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.399 [INFO][5598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.399 [INFO][5598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.399 [INFO][5598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.404 [WARNING][5598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.404 [INFO][5598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.406 [INFO][5598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:58.411406 containerd[1465]: 2025-07-07 00:01:58.408 [INFO][5590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.412524 containerd[1465]: time="2025-07-07T00:01:58.411444272Z" level=info msg="TearDown network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" successfully" Jul 7 00:01:58.412524 containerd[1465]: time="2025-07-07T00:01:58.411474861Z" level=info msg="StopPodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" returns successfully" Jul 7 00:01:58.412524 containerd[1465]: time="2025-07-07T00:01:58.412034816Z" level=info msg="RemovePodSandbox for \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\"" Jul 7 00:01:58.412524 containerd[1465]: time="2025-07-07T00:01:58.412081386Z" level=info msg="Forcibly stopping sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\"" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.445 [WARNING][5617] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"05a5d423-9d7b-48ae-ba84-e25c4fe860af", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b916f3d47cfe0bb9b1a864e25fe0ccb22c17c7cbf2cfa00f7c34ced275a00064", Pod:"calico-apiserver-7b996bf4d6-mtmf7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4549d94e80c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.445 [INFO][5617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.445 [INFO][5617] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" iface="eth0" netns="" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.445 [INFO][5617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.445 [INFO][5617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.463 [INFO][5625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.463 [INFO][5625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.463 [INFO][5625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.468 [WARNING][5625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.468 [INFO][5625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" HandleID="k8s-pod-network.81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--mtmf7-eth0" Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.469 [INFO][5625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:58.474927 containerd[1465]: 2025-07-07 00:01:58.472 [INFO][5617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098" Jul 7 00:01:58.475383 containerd[1465]: time="2025-07-07T00:01:58.474970070Z" level=info msg="TearDown network for sandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" successfully" Jul 7 00:01:58.746565 containerd[1465]: time="2025-07-07T00:01:58.746451768Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:58.746565 containerd[1465]: time="2025-07-07T00:01:58.746526022Z" level=info msg="RemovePodSandbox \"81cf64ea4ebde3756e1797c85babcb75636b94ee7fc753fcc821476fa9154098\" returns successfully" Jul 7 00:01:58.747393 containerd[1465]: time="2025-07-07T00:01:58.747346402Z" level=info msg="StopPodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\"" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.799 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6637e26b-2700-424e-b4eb-f1031d446d3b", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e", Pod:"calico-apiserver-7b996bf4d6-k76r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86f401837e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.799 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.799 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" iface="eth0" netns="" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.799 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.799 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.822 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.822 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.822 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.829 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.829 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.830 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:58.836773 containerd[1465]: 2025-07-07 00:01:58.833 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.837212 containerd[1465]: time="2025-07-07T00:01:58.836814386Z" level=info msg="TearDown network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" successfully" Jul 7 00:01:58.837212 containerd[1465]: time="2025-07-07T00:01:58.836844314Z" level=info msg="StopPodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" returns successfully" Jul 7 00:01:58.837484 containerd[1465]: time="2025-07-07T00:01:58.837337219Z" level=info msg="RemovePodSandbox for \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\"" Jul 7 00:01:58.837484 containerd[1465]: time="2025-07-07T00:01:58.837371515Z" level=info msg="Forcibly stopping sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\"" Jul 7 00:01:58.881424 systemd[1]: Started sshd@11-10.0.0.146:22-10.0.0.1:42574.service - OpenSSH per-connection server daemon (10.0.0.1:42574). Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.891 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0", GenerateName:"calico-apiserver-7b996bf4d6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6637e26b-2700-424e-b4eb-f1031d446d3b", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b996bf4d6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea1c0062b7d0ac3f39fd0ed2b27bb2a819d0c0ab525e29becc9787a1234a764e", Pod:"calico-apiserver-7b996bf4d6-k76r4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali86f401837e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.891 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.891 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" iface="eth0" netns="" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.891 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.891 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.924 [INFO][5683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.925 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.925 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.932 [WARNING][5683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.932 [INFO][5683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" HandleID="k8s-pod-network.bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Workload="localhost-k8s-calico--apiserver--7b996bf4d6--k76r4-eth0" Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.934 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:58.940634 containerd[1465]: 2025-07-07 00:01:58.937 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90" Jul 7 00:01:58.941137 containerd[1465]: time="2025-07-07T00:01:58.940678996Z" level=info msg="TearDown network for sandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" successfully" Jul 7 00:01:58.945466 containerd[1465]: time="2025-07-07T00:01:58.945282055Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:58.945466 containerd[1465]: time="2025-07-07T00:01:58.945367790Z" level=info msg="RemovePodSandbox \"bc4bb0584103bca9064c472e9ad0ebf361fed49b55da551a40803d43c94a7d90\" returns successfully" Jul 7 00:01:58.946040 containerd[1465]: time="2025-07-07T00:01:58.946005436Z" level=info msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" Jul 7 00:01:58.957503 sshd[5681]: Accepted publickey for core from 10.0.0.1 port 42574 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:58.960088 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:58.968660 systemd-logind[1455]: New session 12 of user core. Jul 7 00:01:58.977528 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jul 7 00:01:58.977892 containerd[1465]: time="2025-07-07T00:01:58.975912375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.977892 containerd[1465]: time="2025-07-07T00:01:58.976764906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 00:01:58.979061 containerd[1465]: time="2025-07-07T00:01:58.978102789Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.982152 containerd[1465]: time="2025-07-07T00:01:58.982105454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:01:58.983584 containerd[1465]: time="2025-07-07T00:01:58.983405804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 6.166461594s" Jul 7 00:01:58.983584 containerd[1465]: time="2025-07-07T00:01:58.983453877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 00:01:58.986616 containerd[1465]: time="2025-07-07T00:01:58.986579594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 00:01:58.990199 containerd[1465]: time="2025-07-07T00:01:58.990150863Z" level=info msg="CreateContainer within sandbox \"fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 00:01:59.003011 containerd[1465]: time="2025-07-07T00:01:59.002782003Z" level=info msg="CreateContainer within sandbox \"fbfa5e235e97ac92591b2a2c316c39a1e2a520b7d56a9160daa5a544e3074f3f\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b71b02b63f07dfcbd7a75f40cac9ff7ae74bb68b1e9b1d1743e15184a4edfc35\"" Jul 7 00:01:59.003847 containerd[1465]: time="2025-07-07T00:01:59.003602782Z" level=info msg="StartContainer for \"b71b02b63f07dfcbd7a75f40cac9ff7ae74bb68b1e9b1d1743e15184a4edfc35\"" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:58.988 [WARNING][5701] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0", GenerateName:"calico-kube-controllers-5c7b8cc4b5-", Namespace:"calico-system", SelfLink:"", UID:"00999ebd-f6ba-4e92-8b64-466bdf8e89f5", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7b8cc4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923", Pod:"calico-kube-controllers-5c7b8cc4b5-cbtqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b99ac512f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:58.989 [INFO][5701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:58.989 [INFO][5701] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" iface="eth0" netns="" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:58.989 [INFO][5701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:58.989 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.013 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.013 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.013 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.019 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.020 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.023 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:59.030793 containerd[1465]: 2025-07-07 00:01:59.026 [INFO][5701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.032108 containerd[1465]: time="2025-07-07T00:01:59.030848637Z" level=info msg="TearDown network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" successfully" Jul 7 00:01:59.032108 containerd[1465]: time="2025-07-07T00:01:59.030882673Z" level=info msg="StopPodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" returns successfully" Jul 7 00:01:59.034785 containerd[1465]: time="2025-07-07T00:01:59.034127094Z" level=info msg="RemovePodSandbox for \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" Jul 7 00:01:59.034785 containerd[1465]: time="2025-07-07T00:01:59.034164005Z" level=info msg="Forcibly stopping sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\"" Jul 7 00:01:59.063462 systemd[1]: Started cri-containerd-b71b02b63f07dfcbd7a75f40cac9ff7ae74bb68b1e9b1d1743e15184a4edfc35.scope - libcontainer container b71b02b63f07dfcbd7a75f40cac9ff7ae74bb68b1e9b1d1743e15184a4edfc35. Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.101 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0", GenerateName:"calico-kube-controllers-5c7b8cc4b5-", Namespace:"calico-system", SelfLink:"", UID:"00999ebd-f6ba-4e92-8b64-466bdf8e89f5", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c7b8cc4b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923", Pod:"calico-kube-controllers-5c7b8cc4b5-cbtqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif5b99ac512f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.101 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.101 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" iface="eth0" netns="" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.101 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.101 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.128 [INFO][5773] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.128 [INFO][5773] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.128 [INFO][5773] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.134 [WARNING][5773] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.134 [INFO][5773] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" HandleID="k8s-pod-network.51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Workload="localhost-k8s-calico--kube--controllers--5c7b8cc4b5--cbtqq-eth0" Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.135 [INFO][5773] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:59.141412 containerd[1465]: 2025-07-07 00:01:59.138 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106" Jul 7 00:01:59.142885 containerd[1465]: time="2025-07-07T00:01:59.141440720Z" level=info msg="TearDown network for sandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" successfully" Jul 7 00:01:59.389276 containerd[1465]: time="2025-07-07T00:01:59.389219269Z" level=info msg="StartContainer for \"b71b02b63f07dfcbd7a75f40cac9ff7ae74bb68b1e9b1d1743e15184a4edfc35\" returns successfully" Jul 7 00:01:59.443833 containerd[1465]: time="2025-07-07T00:01:59.443781014Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:59.443978 containerd[1465]: time="2025-07-07T00:01:59.443867661Z" level=info msg="RemovePodSandbox \"51f0cf942f2ccd880745b654f71048d76ff8ecb83622fe00b28c03b0ff6b3106\" returns successfully" Jul 7 00:01:59.444381 containerd[1465]: time="2025-07-07T00:01:59.444322322Z" level=info msg="StopPodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\"" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.481 [WARNING][5807] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51", Pod:"coredns-674b8bbfcf-jpvlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7e3012166e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.481 [INFO][5807] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.481 [INFO][5807] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" iface="eth0" netns="" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.481 [INFO][5807] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.481 [INFO][5807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.512 [INFO][5816] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.512 [INFO][5816] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.512 [INFO][5816] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.519 [WARNING][5816] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.519 [INFO][5816] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.522 [INFO][5816] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:59.528868 containerd[1465]: 2025-07-07 00:01:59.526 [INFO][5807] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.529373 containerd[1465]: time="2025-07-07T00:01:59.528908711Z" level=info msg="TearDown network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" successfully" Jul 7 00:01:59.529373 containerd[1465]: time="2025-07-07T00:01:59.528941745Z" level=info msg="StopPodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" returns successfully" Jul 7 00:01:59.529615 containerd[1465]: time="2025-07-07T00:01:59.529534292Z" level=info msg="RemovePodSandbox for \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\"" Jul 7 00:01:59.529615 containerd[1465]: time="2025-07-07T00:01:59.529563759Z" level=info msg="Forcibly stopping sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\"" Jul 7 00:01:59.532961 sshd[5681]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:59.542013 systemd[1]: sshd@11-10.0.0.146:22-10.0.0.1:42574.service: Deactivated successfully. Jul 7 00:01:59.544627 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 00:01:59.546869 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Jul 7 00:01:59.558653 systemd[1]: Started sshd@12-10.0.0.146:22-10.0.0.1:42590.service - OpenSSH per-connection server daemon (10.0.0.1:42590). Jul 7 00:01:59.560730 systemd-logind[1455]: Removed session 12. Jul 7 00:01:59.588894 sshd[5849]: Accepted publickey for core from 10.0.0.1 port 42590 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:59.590683 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:59.596964 systemd-logind[1455]: New session 13 of user core. Jul 7 00:01:59.601455 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.566 [WARNING][5840] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ef0cbce2-a46c-4e9a-95aa-18c3fe6c5ece", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4c24bbd46613eaf9ce2e6225156a42efbe61b64316a73e7aff66e201f82cae51", Pod:"coredns-674b8bbfcf-jpvlw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid7e3012166e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.566 [INFO][5840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.566 [INFO][5840] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" iface="eth0" netns="" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.566 [INFO][5840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.566 [INFO][5840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.593 [INFO][5852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.594 [INFO][5852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.594 [INFO][5852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.599 [WARNING][5852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.599 [INFO][5852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" HandleID="k8s-pod-network.00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Workload="localhost-k8s-coredns--674b8bbfcf--jpvlw-eth0" Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.601 [INFO][5852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:59.607409 containerd[1465]: 2025-07-07 00:01:59.604 [INFO][5840] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07" Jul 7 00:01:59.607984 containerd[1465]: time="2025-07-07T00:01:59.607464445Z" level=info msg="TearDown network for sandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" successfully" Jul 7 00:01:59.612019 containerd[1465]: time="2025-07-07T00:01:59.611962322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:59.612113 containerd[1465]: time="2025-07-07T00:01:59.612090239Z" level=info msg="RemovePodSandbox \"00570eb648b4f248ba2f0f89caf8b728d56300436ffb131d42122ca88d197c07\" returns successfully" Jul 7 00:01:59.612636 containerd[1465]: time="2025-07-07T00:01:59.612609565Z" level=info msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.649 [WARNING][5872] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4s7sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084", Pod:"csi-node-driver-4s7sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33cd740f1ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.650 [INFO][5872] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.650 [INFO][5872] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" iface="eth0" netns="" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.650 [INFO][5872] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.650 [INFO][5872] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.674 [INFO][5883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.675 [INFO][5883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.675 [INFO][5883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.683 [WARNING][5883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.684 [INFO][5883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.685 [INFO][5883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:59.692146 containerd[1465]: 2025-07-07 00:01:59.688 [INFO][5872] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.692146 containerd[1465]: time="2025-07-07T00:01:59.692076534Z" level=info msg="TearDown network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" successfully" Jul 7 00:01:59.692146 containerd[1465]: time="2025-07-07T00:01:59.692108786Z" level=info msg="StopPodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" returns successfully" Jul 7 00:01:59.693077 containerd[1465]: time="2025-07-07T00:01:59.692855952Z" level=info msg="RemovePodSandbox for \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" Jul 7 00:01:59.693077 containerd[1465]: time="2025-07-07T00:01:59.692916449Z" level=info msg="Forcibly stopping sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\"" Jul 7 00:01:59.785623 sshd[5849]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.738 [WARNING][5907] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4s7sb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2fd0fbaa-66a3-48fe-88c7-fc37b194a8d6", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 0, 1, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084", Pod:"csi-node-driver-4s7sb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali33cd740f1ca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.738 [INFO][5907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.738 [INFO][5907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" iface="eth0" netns="" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.738 [INFO][5907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.738 [INFO][5907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.763 [INFO][5915] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.763 [INFO][5915] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.763 [INFO][5915] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.768 [WARNING][5915] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.768 [INFO][5915] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" HandleID="k8s-pod-network.7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Workload="localhost-k8s-csi--node--driver--4s7sb-eth0" Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.774 [INFO][5915] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 00:01:59.793910 containerd[1465]: 2025-07-07 00:01:59.787 [INFO][5907] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055" Jul 7 00:01:59.795807 containerd[1465]: time="2025-07-07T00:01:59.794712855Z" level=info msg="TearDown network for sandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" successfully" Jul 7 00:01:59.795070 systemd[1]: sshd@12-10.0.0.146:22-10.0.0.1:42590.service: Deactivated successfully. Jul 7 00:01:59.799475 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 00:01:59.800787 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit. Jul 7 00:01:59.812472 systemd[1]: Started sshd@13-10.0.0.146:22-10.0.0.1:42594.service - OpenSSH per-connection server daemon (10.0.0.1:42594). Jul 7 00:01:59.819225 systemd-logind[1455]: Removed session 13. Jul 7 00:01:59.846630 sshd[5934]: Accepted publickey for core from 10.0.0.1 port 42594 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:01:59.848627 sshd[5934]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:01:59.853200 systemd-logind[1455]: New session 14 of user core. Jul 7 00:01:59.859461 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 00:01:59.933070 containerd[1465]: time="2025-07-07T00:01:59.932979442Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 00:01:59.933528 containerd[1465]: time="2025-07-07T00:01:59.933296515Z" level=info msg="RemovePodSandbox \"7d96afc0b51c825b34f2195b4143ad27110788ae2eab7bcc71c3998ed53d6055\" returns successfully" Jul 7 00:01:59.986661 sshd[5934]: pam_unix(sshd:session): session closed for user core Jul 7 00:01:59.991276 systemd[1]: sshd@13-10.0.0.146:22-10.0.0.1:42594.service: Deactivated successfully. Jul 7 00:01:59.993524 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 00:01:59.994233 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit. Jul 7 00:01:59.995241 systemd-logind[1455]: Removed session 14. Jul 7 00:02:00.866047 systemd[1]: run-containerd-runc-k8s.io-b71b02b63f07dfcbd7a75f40cac9ff7ae74bb68b1e9b1d1743e15184a4edfc35-runc.SXCAYG.mount: Deactivated successfully. 
Jul 7 00:02:01.142665 kubelet[2515]: I0707 00:02:01.142485 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-cdfxn" podStartSLOduration=41.041420941 podStartE2EDuration="49.142456896s" podCreationTimestamp="2025-07-07 00:01:12 +0000 UTC" firstStartedPulling="2025-07-07 00:01:50.884547518 +0000 UTC m=+54.950148982" lastFinishedPulling="2025-07-07 00:01:58.985583473 +0000 UTC m=+63.051184937" observedRunningTime="2025-07-07 00:01:59.811778743 +0000 UTC m=+63.877380207" watchObservedRunningTime="2025-07-07 00:02:01.142456896 +0000 UTC m=+65.208058350" Jul 7 00:02:02.804617 containerd[1465]: time="2025-07-07T00:02:02.804496578Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:02.805573 containerd[1465]: time="2025-07-07T00:02:02.805542828Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 00:02:02.808569 containerd[1465]: time="2025-07-07T00:02:02.808537430Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:02.811419 containerd[1465]: time="2025-07-07T00:02:02.811380892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:02.812100 containerd[1465]: time="2025-07-07T00:02:02.812074621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.825467905s" Jul 7 00:02:02.812169 containerd[1465]: time="2025-07-07T00:02:02.812104228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 00:02:02.813444 containerd[1465]: time="2025-07-07T00:02:02.813416121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 00:02:02.840363 containerd[1465]: time="2025-07-07T00:02:02.840294683Z" level=info msg="CreateContainer within sandbox \"d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 00:02:02.855153 containerd[1465]: time="2025-07-07T00:02:02.855094701Z" level=info msg="CreateContainer within sandbox \"d18baa4f866584f1f21851c47b81924c3c9e7d7677a8f87a3b2a04d30ce81923\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5255772664ba1d08d2628c1c22e5509695410cbd339272fb201a14d4aa2220a0\"" Jul 7 00:02:02.856227 containerd[1465]: time="2025-07-07T00:02:02.856199475Z" level=info msg="StartContainer for \"5255772664ba1d08d2628c1c22e5509695410cbd339272fb201a14d4aa2220a0\"" Jul 7 00:02:02.888494 systemd[1]: Started cri-containerd-5255772664ba1d08d2628c1c22e5509695410cbd339272fb201a14d4aa2220a0.scope - libcontainer container 5255772664ba1d08d2628c1c22e5509695410cbd339272fb201a14d4aa2220a0. 
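Each pull above is bracketed by a "stop pulling image ... bytes read=N" entry and a "Pulled image ... in D" entry, so the effective fetch rate falls out directly; for the kube-controllers image that is roughly 13.4 MB/s. A quick computation with the logged numbers, treating "bytes read" as what containerd fetched over the wire:

```go
package main

import "fmt"

func main() {
	// "stop pulling image ...: active requests=0, bytes read=51276688"
	// "Pulled image ... in 3.825467905s"
	bytes := 51276688.0
	secs := 3.825467905
	fmt.Printf("%.1f MB/s\n", bytes/secs/1e6) // ≈ 13.4 MB/s
}
```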
Jul 7 00:02:02.960388 containerd[1465]: time="2025-07-07T00:02:02.953348628Z" level=info msg="StartContainer for \"5255772664ba1d08d2628c1c22e5509695410cbd339272fb201a14d4aa2220a0\" returns successfully" Jul 7 00:02:04.102274 kubelet[2515]: I0707 00:02:04.102027 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c7b8cc4b5-cbtqq" podStartSLOduration=39.243227588 podStartE2EDuration="51.102005967s" podCreationTimestamp="2025-07-07 00:01:13 +0000 UTC" firstStartedPulling="2025-07-07 00:01:50.954081398 +0000 UTC m=+55.019682862" lastFinishedPulling="2025-07-07 00:02:02.812859777 +0000 UTC m=+66.878461241" observedRunningTime="2025-07-07 00:02:04.101038262 +0000 UTC m=+68.166639726" watchObservedRunningTime="2025-07-07 00:02:04.102005967 +0000 UTC m=+68.167607431" Jul 7 00:02:04.998613 systemd[1]: Started sshd@14-10.0.0.146:22-10.0.0.1:42602.service - OpenSSH per-connection server daemon (10.0.0.1:42602). Jul 7 00:02:05.046267 sshd[6069]: Accepted publickey for core from 10.0.0.1 port 42602 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:05.048014 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:05.052404 systemd-logind[1455]: New session 15 of user core. Jul 7 00:02:05.062456 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 00:02:05.426418 sshd[6069]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:05.430598 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit. Jul 7 00:02:05.430824 systemd[1]: sshd@14-10.0.0.146:22-10.0.0.1:42602.service: Deactivated successfully. Jul 7 00:02:05.433730 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 00:02:05.436278 systemd-logind[1455]: Removed session 15. 
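The kubelet pod_startup_latency_tracker entries are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which the startup SLO excludes. Re-deriving the calico-kube-controllers numbers from the timestamps logged just above (a quick consistency check, not kubelet's actual code):

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-07 00:01:13 +0000 UTC")
	firstPull := mustParse("2025-07-07 00:01:50.954081398 +0000 UTC")
	lastPull := mustParse("2025-07-07 00:02:02.812859777 +0000 UTC")
	running := mustParse("2025-07-07 00:02:04.102005967 +0000 UTC") // watchObservedRunningTime

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull) // E2E minus the image-pull window

	fmt.Println("podStartE2EDuration:", e2e) // 51.102005967s, as logged
	fmt.Println("podStartSLOduration:", slo) // 39.243227588s, as logged
}
```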
Jul 7 00:02:05.811089 containerd[1465]: time="2025-07-07T00:02:05.810941719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:05.812355 containerd[1465]: time="2025-07-07T00:02:05.812234309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 00:02:05.813602 containerd[1465]: time="2025-07-07T00:02:05.813564892Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:05.835357 containerd[1465]: time="2025-07-07T00:02:05.835013473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 00:02:05.835925 containerd[1465]: time="2025-07-07T00:02:05.835758880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.02230699s" Jul 7 00:02:05.835925 containerd[1465]: time="2025-07-07T00:02:05.835806050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 00:02:05.841942 containerd[1465]: time="2025-07-07T00:02:05.841874982Z" level=info msg="CreateContainer within sandbox \"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 00:02:05.856251 containerd[1465]: time="2025-07-07T00:02:05.856052506Z" level=info msg="CreateContainer within sandbox \"9253158dc7c4a162b0114e8d73e9a31bcace6cec7c4c72a293f3fcc76a66a084\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8065e6f7ee017cafa23cf5491e9b9ba8eb46f4c0ea2a567afd93abf3803bdf7f\"" Jul 7 00:02:05.857493 containerd[1465]: time="2025-07-07T00:02:05.857453385Z" level=info msg="StartContainer for \"8065e6f7ee017cafa23cf5491e9b9ba8eb46f4c0ea2a567afd93abf3803bdf7f\"" Jul 7 00:02:05.923524 systemd[1]: Started cri-containerd-8065e6f7ee017cafa23cf5491e9b9ba8eb46f4c0ea2a567afd93abf3803bdf7f.scope - libcontainer container 8065e6f7ee017cafa23cf5491e9b9ba8eb46f4c0ea2a567afd93abf3803bdf7f. 
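The PullImage/ImageCreate pairs above are containerd resolving a tag, fetching its layers, and recording both the tag and the digest reference as image objects. A minimal sketch of the same pull through containerd's Go client, assuming the default socket path and the "k8s.io" namespace that the CRI plugin uses for kubelet-managed images:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx,
		"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", img.Name(), img.Target().Digest)
}
```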
Jul 7 00:02:05.968026 containerd[1465]: time="2025-07-07T00:02:05.967966982Z" level=info msg="StartContainer for \"8065e6f7ee017cafa23cf5491e9b9ba8eb46f4c0ea2a567afd93abf3803bdf7f\" returns successfully" Jul 7 00:02:06.321534 kubelet[2515]: I0707 00:02:06.321498 2515 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 00:02:06.325492 kubelet[2515]: I0707 00:02:06.325475 2515 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 00:02:10.436550 systemd[1]: Started sshd@15-10.0.0.146:22-10.0.0.1:46566.service - OpenSSH per-connection server daemon (10.0.0.1:46566). Jul 7 00:02:10.489117 sshd[6150]: Accepted publickey for core from 10.0.0.1 port 46566 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:10.490765 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:10.494601 systemd-logind[1455]: New session 16 of user core. Jul 7 00:02:10.504453 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 00:02:10.724415 sshd[6150]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:10.729268 systemd[1]: sshd@15-10.0.0.146:22-10.0.0.1:46566.service: Deactivated successfully. Jul 7 00:02:10.731437 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 00:02:10.732205 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit. Jul 7 00:02:10.733389 systemd-logind[1455]: Removed session 16. Jul 7 00:02:15.061250 kubelet[2515]: E0707 00:02:15.061194 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:02:15.739532 systemd[1]: Started sshd@16-10.0.0.146:22-10.0.0.1:46574.service - OpenSSH per-connection server daemon (10.0.0.1:46574). Jul 7 00:02:15.783893 sshd[6166]: Accepted publickey for core from 10.0.0.1 port 46574 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:15.785559 sshd[6166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:15.789750 systemd-logind[1455]: New session 17 of user core. Jul 7 00:02:15.799442 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 00:02:16.059926 sshd[6166]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:16.064198 systemd[1]: sshd@16-10.0.0.146:22-10.0.0.1:46574.service: Deactivated successfully. Jul 7 00:02:16.066347 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 00:02:16.066964 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit. Jul 7 00:02:16.067805 systemd-logind[1455]: Removed session 17. Jul 7 00:02:21.071419 systemd[1]: Started sshd@17-10.0.0.146:22-10.0.0.1:53490.service - OpenSSH per-connection server daemon (10.0.0.1:53490). Jul 7 00:02:21.102644 sshd[6209]: Accepted publickey for core from 10.0.0.1 port 53490 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:21.104171 sshd[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:21.108061 systemd-logind[1455]: New session 18 of user core. Jul 7 00:02:21.115465 systemd[1]: Started session-18.scope - Session 18 of User core. 
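The two kubelet csi_plugin.go lines above are the tail end of the plugin-watcher handshake: kubelet finds a registration socket, calls GetInfo on it, validates the returned driver name and endpoint, and registers the plugin. A sketch of the registrar side of that handshake, assuming the standard k8s.io/kubelet pluginregistration v1 API; the registration socket path follows the usual plugins_registry naming convention and is an assumption here, while the driver name, endpoint, and version are taken from the log:

```go
package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// registrar answers kubelet's plugin-watcher handshake, which produces the
// "Trying to validate ..." / "Register new plugin ..." lines above.
type registrar struct{}

func (registrar) GetInfo(ctx context.Context, _ *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"},
	}, nil
}

func (registrar) NotifyRegistrationStatus(ctx context.Context, s *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	log.Printf("kubelet registration status: %+v", s)
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// Kubelet's plugin watcher scans this directory for registration sockets.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	srv := grpc.NewServer()
	registerapi.RegisterRegistrationServer(srv, registrar{})
	log.Fatal(srv.Serve(l))
}
```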
Jul 7 00:02:21.234820 sshd[6209]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:21.238683 systemd[1]: sshd@17-10.0.0.146:22-10.0.0.1:53490.service: Deactivated successfully. Jul 7 00:02:21.240613 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 00:02:21.241238 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit. Jul 7 00:02:21.242149 systemd-logind[1455]: Removed session 18. Jul 7 00:02:26.246842 systemd[1]: Started sshd@18-10.0.0.146:22-10.0.0.1:53498.service - OpenSSH per-connection server daemon (10.0.0.1:53498). Jul 7 00:02:26.277988 sshd[6223]: Accepted publickey for core from 10.0.0.1 port 53498 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:26.279675 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:26.283749 systemd-logind[1455]: New session 19 of user core. Jul 7 00:02:26.290437 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 00:02:26.394164 sshd[6223]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:26.405045 systemd[1]: sshd@18-10.0.0.146:22-10.0.0.1:53498.service: Deactivated successfully. Jul 7 00:02:26.407700 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 00:02:26.410122 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit. Jul 7 00:02:26.416780 systemd[1]: Started sshd@19-10.0.0.146:22-10.0.0.1:53506.service - OpenSSH per-connection server daemon (10.0.0.1:53506). Jul 7 00:02:26.419198 systemd-logind[1455]: Removed session 19. Jul 7 00:02:26.444402 sshd[6237]: Accepted publickey for core from 10.0.0.1 port 53506 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:26.445997 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:26.449994 systemd-logind[1455]: New session 20 of user core. Jul 7 00:02:26.463447 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 00:02:26.757383 sshd[6237]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:26.765117 systemd[1]: sshd@19-10.0.0.146:22-10.0.0.1:53506.service: Deactivated successfully. Jul 7 00:02:26.766856 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 00:02:26.768135 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit. Jul 7 00:02:26.781632 systemd[1]: Started sshd@20-10.0.0.146:22-10.0.0.1:53514.service - OpenSSH per-connection server daemon (10.0.0.1:53514). Jul 7 00:02:26.782550 systemd-logind[1455]: Removed session 20. Jul 7 00:02:26.809401 sshd[6250]: Accepted publickey for core from 10.0.0.1 port 53514 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:26.810841 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:26.814801 systemd-logind[1455]: New session 21 of user core. Jul 7 00:02:26.832528 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 00:02:27.610671 sshd[6250]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:27.621165 systemd[1]: sshd@20-10.0.0.146:22-10.0.0.1:53514.service: Deactivated successfully. Jul 7 00:02:27.623909 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 00:02:27.626215 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit. Jul 7 00:02:27.633556 systemd[1]: Started sshd@21-10.0.0.146:22-10.0.0.1:53524.service - OpenSSH per-connection server daemon (10.0.0.1:53524). 
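From here on the log is dominated by per-connection SSH units: systemd starts an sshd@N-<host>:22-<peer>:<port>.service instance for each TCP connection, pam_unix opens a session for user core, logind tracks it as session-N.scope, and the whole chain is deactivated on logout. Pairing logind's "New session N" and "Removed session N" timestamps gives session lengths (session 18 above lasted about 134 ms). A small parsing sketch over sample lines copied from this log:

```go
package main

import (
	"fmt"
	"regexp"
	"time"
)

// Sample lines copied from the log above.
var lines = []string{
	"Jul 7 00:02:21.108061 systemd-logind[1455]: New session 18 of user core.",
	"Jul 7 00:02:21.242149 systemd-logind[1455]: Removed session 18.",
}

var re = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) systemd-logind\[\d+\]: (New|Removed) session (\d+)`)

func main() {
	start := map[string]time.Time{}
	for _, l := range lines {
		m := re.FindStringSubmatch(l)
		if m == nil {
			continue
		}
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		switch m[2] {
		case "New":
			start[m[3]] = ts
		case "Removed":
			if t0, ok := start[m[3]]; ok {
				fmt.Printf("session %s: %v\n", m[3], ts.Sub(t0)) // session 18: 134.088ms
			}
		}
	}
}
```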
Jul 7 00:02:27.635209 systemd-logind[1455]: Removed session 21. Jul 7 00:02:27.670351 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 53524 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:27.671843 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:27.675763 systemd-logind[1455]: New session 22 of user core. Jul 7 00:02:27.686427 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 00:02:27.996095 sshd[6269]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:28.005783 systemd[1]: sshd@21-10.0.0.146:22-10.0.0.1:53524.service: Deactivated successfully. Jul 7 00:02:28.007705 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 00:02:28.009722 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit. Jul 7 00:02:28.018713 systemd[1]: Started sshd@22-10.0.0.146:22-10.0.0.1:53540.service - OpenSSH per-connection server daemon (10.0.0.1:53540). Jul 7 00:02:28.019840 systemd-logind[1455]: Removed session 22. Jul 7 00:02:28.049911 sshd[6282]: Accepted publickey for core from 10.0.0.1 port 53540 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:28.051856 kubelet[2515]: E0707 00:02:28.051699 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:02:28.052064 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:28.053578 kubelet[2515]: E0707 00:02:28.053440 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:02:28.058409 systemd-logind[1455]: New session 23 of user core. Jul 7 00:02:28.064468 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 00:02:28.337823 sshd[6282]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:28.341895 systemd[1]: sshd@22-10.0.0.146:22-10.0.0.1:53540.service: Deactivated successfully. Jul 7 00:02:28.343897 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 00:02:28.344644 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit. Jul 7 00:02:28.345486 systemd-logind[1455]: Removed session 23. Jul 7 00:02:29.051101 kubelet[2515]: E0707 00:02:29.051071 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:02:33.351077 systemd[1]: Started sshd@23-10.0.0.146:22-10.0.0.1:45044.service - OpenSSH per-connection server daemon (10.0.0.1:45044). Jul 7 00:02:33.394412 sshd[6323]: Accepted publickey for core from 10.0.0.1 port 45044 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:33.396377 sshd[6323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:33.400553 systemd-logind[1455]: New session 24 of user core. Jul 7 00:02:33.417659 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 00:02:33.606152 sshd[6323]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:33.610463 systemd[1]: sshd@23-10.0.0.146:22-10.0.0.1:45044.service: Deactivated successfully. Jul 7 00:02:33.612361 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 00:02:33.612975 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit. 
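The recurring kubelet dns.go error is benign but worth decoding: the libc resolver only honors the first three nameserver entries in resolv.conf, so kubelet warns and logs the truncated list it actually applied (here 1.1.1.1, 1.0.0.1, and 8.8.8.8, with the remaining servers omitted). A quick check for the same condition on a node, assuming the conventional limit of three:

```go
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

const maxNS = 3 // glibc's MAXNS: nameservers beyond the first three are ignored

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNS {
		fmt.Printf("%d nameservers; only %v will be used\n", len(servers), servers[:maxNS])
	} else {
		fmt.Println("nameserver count OK:", servers)
	}
}
```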
Jul 7 00:02:33.613933 systemd-logind[1455]: Removed session 24. Jul 7 00:02:35.582978 kubelet[2515]: I0707 00:02:35.582825 2515 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4s7sb" podStartSLOduration=67.555080189 podStartE2EDuration="1m22.582809432s" podCreationTimestamp="2025-07-07 00:01:13 +0000 UTC" firstStartedPulling="2025-07-07 00:01:50.808936214 +0000 UTC m=+54.874537678" lastFinishedPulling="2025-07-07 00:02:05.836665467 +0000 UTC m=+69.902266921" observedRunningTime="2025-07-07 00:02:06.858907738 +0000 UTC m=+70.924509222" watchObservedRunningTime="2025-07-07 00:02:35.582809432 +0000 UTC m=+99.648410896" Jul 7 00:02:38.622259 systemd[1]: Started sshd@24-10.0.0.146:22-10.0.0.1:34626.service - OpenSSH per-connection server daemon (10.0.0.1:34626). Jul 7 00:02:38.653544 sshd[6379]: Accepted publickey for core from 10.0.0.1 port 34626 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:38.655538 sshd[6379]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:38.660181 systemd-logind[1455]: New session 25 of user core. Jul 7 00:02:38.669475 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 00:02:38.791804 sshd[6379]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:38.796185 systemd[1]: sshd@24-10.0.0.146:22-10.0.0.1:34626.service: Deactivated successfully. Jul 7 00:02:38.798093 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 00:02:38.798809 systemd-logind[1455]: Session 25 logged out. Waiting for processes to exit. Jul 7 00:02:38.799802 systemd-logind[1455]: Removed session 25. Jul 7 00:02:43.803926 systemd[1]: Started sshd@25-10.0.0.146:22-10.0.0.1:34628.service - OpenSSH per-connection server daemon (10.0.0.1:34628). Jul 7 00:02:43.848716 sshd[6394]: Accepted publickey for core from 10.0.0.1 port 34628 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 7 00:02:43.850670 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 00:02:43.855096 systemd-logind[1455]: New session 26 of user core. Jul 7 00:02:43.863448 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 7 00:02:44.052949 kubelet[2515]: E0707 00:02:44.052463 2515 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 00:02:44.197161 sshd[6394]: pam_unix(sshd:session): session closed for user core Jul 7 00:02:44.202559 systemd[1]: sshd@25-10.0.0.146:22-10.0.0.1:34628.service: Deactivated successfully. Jul 7 00:02:44.205611 systemd[1]: session-26.scope: Deactivated successfully. Jul 7 00:02:44.206785 systemd-logind[1455]: Session 26 logged out. Waiting for processes to exit. Jul 7 00:02:44.208036 systemd-logind[1455]: Removed session 26.