Jul 6 23:48:38.895356 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025
Jul 6 23:48:38.895378 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:48:38.895389 kernel: BIOS-provided physical RAM map:
Jul 6 23:48:38.895395 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jul 6 23:48:38.895401 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jul 6 23:48:38.895407 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jul 6 23:48:38.895415 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jul 6 23:48:38.895421 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jul 6 23:48:38.895427 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Jul 6 23:48:38.895433 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Jul 6 23:48:38.895442 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Jul 6 23:48:38.895449 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Jul 6 23:48:38.895459 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Jul 6 23:48:38.895466 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Jul 6 23:48:38.895476 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Jul 6 23:48:38.895483 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jul 6 23:48:38.895492 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Jul 6 23:48:38.895499 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Jul 6 23:48:38.895506 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jul 6 23:48:38.895512 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 6 23:48:38.895519 kernel: NX (Execute Disable) protection: active
Jul 6 23:48:38.895525 kernel: APIC: Static calls initialized
Jul 6 23:48:38.895533 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:48:38.895542 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118
Jul 6 23:48:38.895551 kernel: SMBIOS 2.8 present.
Jul 6 23:48:38.895558 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Jul 6 23:48:38.895565 kernel: Hypervisor detected: KVM
Jul 6 23:48:38.895574 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 6 23:48:38.895581 kernel: kvm-clock: using sched offset of 5805521936 cycles
Jul 6 23:48:38.895588 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 6 23:48:38.895610 kernel: tsc: Detected 2794.750 MHz processor
Jul 6 23:48:38.895617 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 6 23:48:38.895624 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 6 23:48:38.895631 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Jul 6 23:48:38.895638 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 6 23:48:38.895645 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 6 23:48:38.895655 kernel: Using GB pages for direct mapping
Jul 6 23:48:38.895662 kernel: Secure boot disabled
Jul 6 23:48:38.895669 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:48:38.895676 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jul 6 23:48:38.895686 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:48:38.895693 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:48:38.895701 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:48:38.895710 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jul 6 23:48:38.895718 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:48:38.895728 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:48:38.895735 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:48:38.895742 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:48:38.895749 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 6 23:48:38.895757 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jul 6 23:48:38.895767 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Jul 6 23:48:38.895774 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jul 6 23:48:38.895781 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jul 6 23:48:38.895788 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jul 6 23:48:38.895795 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jul 6 23:48:38.895802 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jul 6 23:48:38.895809 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jul 6 23:48:38.895817 kernel: No NUMA configuration found
Jul 6 23:48:38.895826 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Jul 6 23:48:38.895836 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Jul 6 23:48:38.895843 kernel: Zone ranges:
Jul 6 23:48:38.895850 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 6 23:48:38.895857 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Jul 6 23:48:38.895864 kernel: Normal empty
Jul 6 23:48:38.895872 kernel: Movable zone start for each node
Jul 6 23:48:38.895879 kernel: Early memory node ranges
Jul 6 23:48:38.895886 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jul 6 23:48:38.895893 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jul 6 23:48:38.895900 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jul 6 23:48:38.895909 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Jul 6 23:48:38.895917 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Jul 6 23:48:38.895924 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Jul 6 23:48:38.895933 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Jul 6 23:48:38.895940 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:48:38.895947 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jul 6 23:48:38.895954 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jul 6 23:48:38.895961 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 6 23:48:38.895968 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Jul 6 23:48:38.895978 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Jul 6 23:48:38.895985 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Jul 6 23:48:38.895992 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 6 23:48:38.895999 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 6 23:48:38.896007 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 6 23:48:38.896014 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 6 23:48:38.896021 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 6 23:48:38.896028 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 6 23:48:38.896035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 6 23:48:38.896045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 6 23:48:38.896052 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 6 23:48:38.896059 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 6 23:48:38.896066 kernel: TSC deadline timer available
Jul 6 23:48:38.896073 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 6 23:48:38.896081 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 6 23:48:38.896088 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 6 23:48:38.896095 kernel: kvm-guest: setup PV sched yield
Jul 6 23:48:38.896102 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 6 23:48:38.896111 kernel: Booting paravirtualized kernel on KVM
Jul 6 23:48:38.896119 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 6 23:48:38.896126 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 6 23:48:38.896133 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 6 23:48:38.896140 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 6 23:48:38.896147 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 6 23:48:38.896154 kernel: kvm-guest: PV spinlocks enabled
Jul 6 23:48:38.896161 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 6 23:48:38.896170 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:48:38.896191 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:48:38.896198 kernel: random: crng init done
Jul 6 23:48:38.896206 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:48:38.896213 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:48:38.896220 kernel: Fallback order for Node 0: 0
Jul 6 23:48:38.896227 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Jul 6 23:48:38.896234 kernel: Policy zone: DMA32
Jul 6 23:48:38.896241 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:48:38.896251 kernel: Memory: 2395612K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 171128K reserved, 0K cma-reserved)
Jul 6 23:48:38.896259 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:48:38.896266 kernel: ftrace: allocating 37966 entries in 149 pages
Jul 6 23:48:38.896273 kernel: ftrace: allocated 149 pages with 4 groups
Jul 6 23:48:38.896280 kernel: Dynamic Preempt: voluntary
Jul 6 23:48:38.896296 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:48:38.896307 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:48:38.896315 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:48:38.896322 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:48:38.896330 kernel: Rude variant of Tasks RCU enabled.
Jul 6 23:48:38.896337 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:48:38.896345 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:48:38.896355 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:48:38.896363 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 6 23:48:38.896372 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:48:38.896380 kernel: Console: colour dummy device 80x25
Jul 6 23:48:38.896387 kernel: printk: console [ttyS0] enabled
Jul 6 23:48:38.896397 kernel: ACPI: Core revision 20230628
Jul 6 23:48:38.896405 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 6 23:48:38.896413 kernel: APIC: Switch to symmetric I/O mode setup
Jul 6 23:48:38.896420 kernel: x2apic enabled
Jul 6 23:48:38.896428 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 6 23:48:38.896435 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 6 23:48:38.896443 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 6 23:48:38.896450 kernel: kvm-guest: setup PV IPIs
Jul 6 23:48:38.896458 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 6 23:48:38.896468 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 6 23:48:38.896476 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 6 23:48:38.896483 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 6 23:48:38.896491 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 6 23:48:38.896498 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 6 23:48:38.896506 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 6 23:48:38.896513 kernel: Spectre V2 : Mitigation: Retpolines
Jul 6 23:48:38.896521 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 6 23:48:38.896528 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 6 23:48:38.896538 kernel: RETBleed: Mitigation: untrained return thunk
Jul 6 23:48:38.896546 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 6 23:48:38.896553 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 6 23:48:38.896561 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 6 23:48:38.896571 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 6 23:48:38.896579 kernel: x86/bugs: return thunk changed
Jul 6 23:48:38.896586 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 6 23:48:38.896605 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 6 23:48:38.896616 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 6 23:48:38.896623 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 6 23:48:38.896631 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 6 23:48:38.896639 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 6 23:48:38.896646 kernel: Freeing SMP alternatives memory: 32K
Jul 6 23:48:38.896654 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:48:38.896661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 6 23:48:38.896669 kernel: landlock: Up and running.
Jul 6 23:48:38.896676 kernel: SELinux: Initializing.
Jul 6 23:48:38.896686 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:48:38.896694 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:48:38.896701 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 6 23:48:38.896709 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:48:38.896717 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:48:38.897854 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:48:38.897864 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 6 23:48:38.897871 kernel: ... version: 0
Jul 6 23:48:38.897879 kernel: ... bit width: 48
Jul 6 23:48:38.897891 kernel: ... generic registers: 6
Jul 6 23:48:38.897901 kernel: ... value mask: 0000ffffffffffff
Jul 6 23:48:38.897909 kernel: ... max period: 00007fffffffffff
Jul 6 23:48:38.897917 kernel: ... fixed-purpose events: 0
Jul 6 23:48:38.897926 kernel: ... event mask: 000000000000003f
Jul 6 23:48:38.897933 kernel: signal: max sigframe size: 1776
Jul 6 23:48:38.897941 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:48:38.897949 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:48:38.897956 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:48:38.897967 kernel: smpboot: x86: Booting SMP configuration:
Jul 6 23:48:38.897974 kernel: .... node #0, CPUs: #1 #2 #3
Jul 6 23:48:38.897982 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:48:38.897989 kernel: smpboot: Max logical packages: 1
Jul 6 23:48:38.897997 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 6 23:48:38.898004 kernel: devtmpfs: initialized
Jul 6 23:48:38.898012 kernel: x86/mm: Memory block size: 128MB
Jul 6 23:48:38.898019 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jul 6 23:48:38.898027 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jul 6 23:48:38.898037 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Jul 6 23:48:38.898044 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jul 6 23:48:38.898052 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jul 6 23:48:38.898059 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:48:38.898067 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:48:38.898074 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:48:38.898082 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:48:38.898089 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:48:38.898097 kernel: audit: type=2000 audit(1751845718.232:1): state=initialized audit_enabled=0 res=1
Jul 6 23:48:38.898107 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:48:38.898114 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 6 23:48:38.898121 kernel: cpuidle: using governor menu
Jul 6 23:48:38.898129 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:48:38.898137 kernel: dca service started, version 1.12.1
Jul 6 23:48:38.898144 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 6 23:48:38.898152 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 6 23:48:38.898159 kernel: PCI: Using configuration type 1 for base access
Jul 6 23:48:38.898167 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 6 23:48:38.898186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:48:38.898194 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:48:38.898201 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:48:38.898209 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:48:38.898216 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:48:38.898224 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:48:38.898231 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:48:38.898239 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:48:38.898246 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 6 23:48:38.898256 kernel: ACPI: Interpreter enabled
Jul 6 23:48:38.898263 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 6 23:48:38.898271 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 6 23:48:38.898278 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 6 23:48:38.898286 kernel: PCI: Using E820 reservations for host bridge windows
Jul 6 23:48:38.898293 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 6 23:48:38.898301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:48:38.898509 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:48:38.898667 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 6 23:48:38.898797 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 6 23:48:38.898808 kernel: PCI host bridge to bus 0000:00
Jul 6 23:48:38.898937 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 6 23:48:38.899054 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 6 23:48:38.899170 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 6 23:48:38.900477 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 6 23:48:38.900618 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 6 23:48:38.900734 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Jul 6 23:48:38.900849 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:48:38.901001 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 6 23:48:38.901146 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 6 23:48:38.901285 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jul 6 23:48:38.901419 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jul 6 23:48:38.901545 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jul 6 23:48:38.901706 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jul 6 23:48:38.901835 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 6 23:48:38.901985 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 6 23:48:38.902116 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jul 6 23:48:38.902251 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jul 6 23:48:38.903467 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Jul 6 23:48:38.903647 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 6 23:48:38.903782 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jul 6 23:48:38.903909 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jul 6 23:48:38.904036 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Jul 6 23:48:38.904183 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 6 23:48:38.904313 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jul 6 23:48:38.904447 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jul 6 23:48:38.904573 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Jul 6 23:48:38.904740 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jul 6 23:48:38.904881 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 6 23:48:38.905008 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 6 23:48:38.905162 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 6 23:48:38.905300 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jul 6 23:48:38.905430 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jul 6 23:48:38.905574 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 6 23:48:38.905719 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jul 6 23:48:38.905730 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 6 23:48:38.905738 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 6 23:48:38.905746 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 6 23:48:38.905754 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 6 23:48:38.905766 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 6 23:48:38.905773 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 6 23:48:38.905781 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 6 23:48:38.905788 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 6 23:48:38.905796 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 6 23:48:38.905804 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 6 23:48:38.905811 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 6 23:48:38.905819 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 6 23:48:38.905826 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 6 23:48:38.905836 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 6 23:48:38.905844 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 6 23:48:38.905851 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 6 23:48:38.905859 kernel: iommu: Default domain type: Translated
Jul 6 23:48:38.905866 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 6 23:48:38.905874 kernel: efivars: Registered efivars operations
Jul 6 23:48:38.905882 kernel: PCI: Using ACPI for IRQ routing
Jul 6 23:48:38.905890 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 6 23:48:38.905897 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jul 6 23:48:38.905908 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Jul 6 23:48:38.905915 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Jul 6 23:48:38.905922 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Jul 6 23:48:38.906050 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 6 23:48:38.906186 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 6 23:48:38.906314 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 6 23:48:38.906324 kernel: vgaarb: loaded
Jul 6 23:48:38.906332 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 6 23:48:38.906339 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 6 23:48:38.906351 kernel: clocksource: Switched to clocksource kvm-clock
Jul 6 23:48:38.906359 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:48:38.906367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:48:38.906374 kernel: pnp: PnP ACPI init
Jul 6 23:48:38.906527 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 6 23:48:38.906539 kernel: pnp: PnP ACPI: found 6 devices
Jul 6 23:48:38.906547 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 6 23:48:38.906554 kernel: NET: Registered PF_INET protocol family
Jul 6 23:48:38.906565 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:48:38.906573 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:48:38.906581 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:48:38.906589 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:48:38.906655 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:48:38.906663 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:48:38.906671 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:48:38.906678 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:48:38.906686 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:48:38.906697 kernel: NET: Registered PF_XDP protocol family
Jul 6 23:48:38.906828 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jul 6 23:48:38.906954 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jul 6 23:48:38.908105 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 6 23:48:38.908237 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 6 23:48:38.908352 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 6 23:48:38.908466 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 6 23:48:38.908579 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 6 23:48:38.908716 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Jul 6 23:48:38.908727 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:48:38.908735 kernel: Initialise system trusted keyrings
Jul 6 23:48:38.908743 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:48:38.908750 kernel: Key type asymmetric registered
Jul 6 23:48:38.908758 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:48:38.908765 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 6 23:48:38.908773 kernel: io scheduler mq-deadline registered
Jul 6 23:48:38.908780 kernel: io scheduler kyber registered
Jul 6 23:48:38.908793 kernel: io scheduler bfq registered
Jul 6 23:48:38.908800 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 6 23:48:38.908808 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 6 23:48:38.908816 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 6 23:48:38.908824 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 6 23:48:38.908831 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:48:38.908839 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 6 23:48:38.908847 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 6 23:48:38.908854 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 6 23:48:38.908864 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 6 23:48:38.909010 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 6 23:48:38.909021 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 6 23:48:38.909140 kernel: rtc_cmos 00:04: registered as rtc0
Jul 6 23:48:38.909270 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:48:38 UTC (1751845718)
Jul 6 23:48:38.909389 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 6 23:48:38.909399 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 6 23:48:38.909411 kernel: efifb: probing for efifb
Jul 6 23:48:38.909418 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Jul 6 23:48:38.909426 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Jul 6 23:48:38.909434 kernel: efifb: scrolling: redraw
Jul 6 23:48:38.909442 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Jul 6 23:48:38.909449 kernel: Console: switching to colour frame buffer device 100x37
Jul 6 23:48:38.909457 kernel: fb0: EFI VGA frame buffer device
Jul 6 23:48:38.909484 kernel: pstore: Using crash dump compression: deflate
Jul 6 23:48:38.909494 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 6 23:48:38.909502 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:48:38.909512 kernel: Segment Routing with IPv6
Jul 6 23:48:38.909520 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:48:38.909528 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:48:38.909536 kernel: Key type dns_resolver registered
Jul 6 23:48:38.909544 kernel: IPI shorthand broadcast: enabled
Jul 6 23:48:38.909552 kernel: sched_clock: Marking stable (931002951, 119465452)->(1105788553, -55320150)
Jul 6 23:48:38.909560 kernel: registered taskstats version 1
Jul 6 23:48:38.909567 kernel: Loading compiled-in X.509 certificates
Jul 6 23:48:38.909576 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b'
Jul 6 23:48:38.909586 kernel: Key type .fscrypt registered
Jul 6 23:48:38.909607 kernel: Key type fscrypt-provisioning registered
Jul 6 23:48:38.909615 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:48:38.909623 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:48:38.909631 kernel: ima: No architecture policies found
Jul 6 23:48:38.909639 kernel: clk: Disabling unused clocks
Jul 6 23:48:38.909647 kernel: Freeing unused kernel image (initmem) memory: 42868K
Jul 6 23:48:38.909655 kernel: Write protecting the kernel read-only data: 36864k
Jul 6 23:48:38.909666 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Jul 6 23:48:38.909674 kernel: Run /init as init process
Jul 6 23:48:38.909682 kernel: with arguments:
Jul 6 23:48:38.909690 kernel: /init
Jul 6 23:48:38.909697 kernel: with environment:
Jul 6 23:48:38.909705 kernel: HOME=/
Jul 6 23:48:38.909713 kernel: TERM=linux
Jul 6 23:48:38.909721 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:48:38.909731 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:48:38.909744 systemd[1]: Detected virtualization kvm.
Jul 6 23:48:38.909752 systemd[1]: Detected architecture x86-64.
Jul 6 23:48:38.909761 systemd[1]: Running in initrd.
Jul 6 23:48:38.909769 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:48:38.909777 systemd[1]: Hostname set to .
Jul 6 23:48:38.909788 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:48:38.909797 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:48:38.909805 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:48:38.909814 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:48:38.909823 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:48:38.909831 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:48:38.909848 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:48:38.909860 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:48:38.909870 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:48:38.909879 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:48:38.909887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:48:38.909896 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:48:38.909904 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:48:38.909912 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:48:38.909923 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:48:38.909932 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:48:38.909940 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:48:38.909948 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:48:38.909957 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:48:38.909965 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:48:38.909974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:48:38.909982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:48:38.909990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:48:38.910001 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:48:38.910016 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:48:38.910024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:48:38.910033 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:48:38.910041 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:48:38.910049 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:48:38.910058 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:48:38.910066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:38.910077 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:48:38.910085 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:48:38.910094 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:48:38.910103 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:48:38.910133 systemd-journald[191]: Collecting audit messages is disabled.
Jul 6 23:48:38.910156 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:38.910165 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:48:38.910181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:48:38.910192 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:48:38.910201 systemd-journald[191]: Journal started
Jul 6 23:48:38.910219 systemd-journald[191]: Runtime Journal (/run/log/journal/14336fd23cbb40c1bfac039100a0f19b) is 6.0M, max 48.3M, 42.2M free.
Jul 6 23:48:38.892577 systemd-modules-load[194]: Inserted module 'overlay'
Jul 6 23:48:38.912631 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:48:38.917192 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:48:38.918798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:48:38.924664 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:48:38.926378 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:48:38.929924 kernel: Bridge firewalling registered
Jul 6 23:48:38.926711 systemd-modules-load[194]: Inserted module 'br_netfilter'
Jul 6 23:48:38.927909 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:48:38.930112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:48:38.934250 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:48:38.936433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:48:38.945257 dracut-cmdline[220]: dracut-dracut-053
Jul 6 23:48:38.947461 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:48:38.949412 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:48:38.954753 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:48:38.985543 systemd-resolved[238]: Positive Trust Anchors:
Jul 6 23:48:38.985556 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:48:38.985588 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:48:38.988442 systemd-resolved[238]: Defaulting to hostname 'linux'.
Jul 6 23:48:38.989635 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:48:38.994139 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:48:39.046626 kernel: SCSI subsystem initialized
Jul 6 23:48:39.055619 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:48:39.066633 kernel: iscsi: registered transport (tcp)
Jul 6 23:48:39.087633 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:48:39.087657 kernel: QLogic iSCSI HBA Driver
Jul 6 23:48:39.137577 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:48:39.149747 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:48:39.173926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:48:39.173965 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:48:39.174907 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:48:39.216621 kernel: raid6: avx2x4 gen() 30279 MB/s
Jul 6 23:48:39.233621 kernel: raid6: avx2x2 gen() 30843 MB/s
Jul 6 23:48:39.250647 kernel: raid6: avx2x1 gen() 25967 MB/s
Jul 6 23:48:39.250665 kernel: raid6: using algorithm avx2x2 gen() 30843 MB/s
Jul 6 23:48:39.268653 kernel: raid6: .... xor() 19937 MB/s, rmw enabled
Jul 6 23:48:39.268676 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:48:39.289624 kernel: xor: automatically using best checksumming function avx
Jul 6 23:48:39.489672 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:48:39.506282 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:48:39.520831 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:48:39.532948 systemd-udevd[411]: Using default interface naming scheme 'v255'.
Jul 6 23:48:39.538008 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:48:39.546800 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:48:39.563318 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Jul 6 23:48:39.599632 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:48:39.609836 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:48:39.679939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:48:39.689827 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:48:39.703433 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:48:39.708189 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:48:39.709427 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:48:39.710931 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:48:39.727916 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:48:39.745927 kernel: libata version 3.00 loaded.
Jul 6 23:48:39.747693 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:48:39.752637 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 6 23:48:39.757126 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:48:39.757164 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:48:39.763476 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:48:39.763504 kernel: GPT:9289727 != 19775487
Jul 6 23:48:39.763515 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:48:39.763530 kernel: GPT:9289727 != 19775487
Jul 6 23:48:39.763540 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:48:39.763550 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:48:39.765226 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:48:39.775284 kernel: ahci 0000:00:1f.2: version 3.0
Jul 6 23:48:39.775489 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 6 23:48:39.775507 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 6 23:48:39.775684 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 6 23:48:39.775830 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:48:39.775841 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:48:39.765404 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:48:39.768114 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:48:39.779430 kernel: scsi host0: ahci
Jul 6 23:48:39.769370 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:48:39.781341 kernel: scsi host1: ahci
Jul 6 23:48:39.781524 kernel: scsi host2: ahci
Jul 6 23:48:39.769601 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:39.772453 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:39.783254 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:39.788492 kernel: scsi host3: ahci
Jul 6 23:48:39.788700 kernel: scsi host4: ahci
Jul 6 23:48:39.788856 kernel: scsi host5: ahci
Jul 6 23:48:39.791876 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 6 23:48:39.791894 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 6 23:48:39.791905 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 6 23:48:39.793359 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 6 23:48:39.793413 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 6 23:48:39.793424 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 6 23:48:39.802620 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (468)
Jul 6 23:48:39.805665 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (458)
Jul 6 23:48:39.807832 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:48:39.815617 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:48:39.828296 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:48:39.834374 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:48:39.836747 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:48:39.847733 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:48:39.849921 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:48:39.850894 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:39.853462 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:39.856405 disk-uuid[553]: Primary Header is updated.
Jul 6 23:48:39.856405 disk-uuid[553]: Secondary Entries is updated.
Jul 6 23:48:39.856405 disk-uuid[553]: Secondary Header is updated.
Jul 6 23:48:39.856547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:39.862620 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:48:39.866616 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:48:39.874019 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:39.885823 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:48:39.921342 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:48:40.104757 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 6 23:48:40.104828 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 6 23:48:40.104840 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 6 23:48:40.105626 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 6 23:48:40.106620 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 6 23:48:40.107612 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 6 23:48:40.107644 kernel: ata3.00: applying bridge limits
Jul 6 23:48:40.108615 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 6 23:48:40.109622 kernel: ata3.00: configured for UDMA/100
Jul 6 23:48:40.109635 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 6 23:48:40.173169 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 6 23:48:40.173524 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:48:40.185663 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 6 23:48:40.867959 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:48:40.868029 disk-uuid[555]: The operation has completed successfully.
Jul 6 23:48:40.892230 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:48:40.892368 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:48:40.924759 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:48:40.928507 sh[595]: Success
Jul 6 23:48:40.941646 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 6 23:48:40.974475 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:48:40.987246 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:48:40.990156 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:48:41.003133 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:48:41.003222 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:48:41.003234 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:48:41.004505 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:48:41.005319 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:48:41.010204 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:48:41.011728 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:48:41.018840 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:48:41.020729 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:48:41.029379 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:48:41.029437 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:48:41.029448 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:48:41.032635 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:48:41.042688 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:48:41.044302 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:48:41.054288 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:48:41.059803 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:48:41.124175 ignition[685]: Ignition 2.19.0
Jul 6 23:48:41.124186 ignition[685]: Stage: fetch-offline
Jul 6 23:48:41.124224 ignition[685]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:48:41.124235 ignition[685]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:48:41.124340 ignition[685]: parsed url from cmdline: ""
Jul 6 23:48:41.124345 ignition[685]: no config URL provided
Jul 6 23:48:41.124350 ignition[685]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:48:41.124360 ignition[685]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:48:41.124390 ignition[685]: op(1): [started] loading QEMU firmware config module
Jul 6 23:48:41.124396 ignition[685]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:48:41.133177 ignition[685]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:48:41.150134 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:48:41.161759 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:48:41.174704 ignition[685]: parsing config with SHA512: ea96ef974f06be70102cbabde078934cd7c3fbccbdee55eeb6b36d0fc1910832ac08fda6627e3b68e4da66098a3650ad0f6df013d47f9f08c38e98be0b6eecf9
Jul 6 23:48:41.178808 unknown[685]: fetched base config from "system"
Jul 6 23:48:41.179780 unknown[685]: fetched user config from "qemu"
Jul 6 23:48:41.180577 ignition[685]: fetch-offline: fetch-offline passed
Jul 6 23:48:41.180752 ignition[685]: Ignition finished successfully
Jul 6 23:48:41.183790 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:48:41.186965 systemd-networkd[783]: lo: Link UP
Jul 6 23:48:41.186976 systemd-networkd[783]: lo: Gained carrier
Jul 6 23:48:41.188786 systemd-networkd[783]: Enumeration completed
Jul 6 23:48:41.188901 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:48:41.189300 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:48:41.189304 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:48:41.189565 systemd[1]: Reached target network.target - Network.
Jul 6 23:48:41.190681 systemd-networkd[783]: eth0: Link UP
Jul 6 23:48:41.190686 systemd-networkd[783]: eth0: Gained carrier
Jul 6 23:48:41.190694 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:48:41.191507 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:48:41.199781 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:48:41.204666 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:48:41.218118 ignition[786]: Ignition 2.19.0
Jul 6 23:48:41.218131 ignition[786]: Stage: kargs
Jul 6 23:48:41.218358 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:48:41.218371 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:48:41.219279 ignition[786]: kargs: kargs passed
Jul 6 23:48:41.219328 ignition[786]: Ignition finished successfully
Jul 6 23:48:41.225994 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:48:41.233787 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:48:41.248521 ignition[795]: Ignition 2.19.0
Jul 6 23:48:41.248533 ignition[795]: Stage: disks
Jul 6 23:48:41.248718 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:48:41.248730 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:48:41.249466 ignition[795]: disks: disks passed
Jul 6 23:48:41.251754 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:48:41.249507 ignition[795]: Ignition finished successfully
Jul 6 23:48:41.252971 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:48:41.254565 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:48:41.256660 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:48:41.257672 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:48:41.259385 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:48:41.269752 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:48:41.281374 systemd-resolved[238]: Detected conflict on linux IN A 10.0.0.53
Jul 6 23:48:41.281393 systemd-resolved[238]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Jul 6 23:48:41.283523 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:48:41.289841 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:48:41.305716 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:48:41.394620 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:48:41.394811 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:48:41.396270 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:48:41.406681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:48:41.407781 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:48:41.409272 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:48:41.409325 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:48:41.409352 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:48:41.418235 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:48:41.423043 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Jul 6 23:48:41.423068 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:48:41.423079 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:48:41.423090 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:48:41.420370 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:48:41.428619 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:48:41.429768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:48:41.461783 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:48:41.467404 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:48:41.472620 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:48:41.477628 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:48:41.568582 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:48:41.575734 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:48:41.577380 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:48:41.584620 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:48:41.605710 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:48:41.609093 ignition[926]: INFO : Ignition 2.19.0
Jul 6 23:48:41.609093 ignition[926]: INFO : Stage: mount
Jul 6 23:48:41.610795 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:48:41.610795 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:48:41.610795 ignition[926]: INFO : mount: mount passed
Jul 6 23:48:41.610795 ignition[926]: INFO : Ignition finished successfully
Jul 6 23:48:41.614050 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:48:41.622804 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:48:42.002519 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:48:42.015895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:48:42.024621 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Jul 6 23:48:42.024668 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:48:42.026537 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:48:42.026563 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:48:42.029630 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:48:42.031916 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:48:42.054740 ignition[957]: INFO : Ignition 2.19.0
Jul 6 23:48:42.054740 ignition[957]: INFO : Stage: files
Jul 6 23:48:42.056578 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:48:42.056578 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:48:42.056578 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:48:42.060565 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:48:42.060565 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:48:42.060565 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:48:42.060565 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:48:42.060565 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:48:42.059893 unknown[957]: wrote ssh authorized keys file for user: core
Jul 6 23:48:42.068491 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:48:42.068491 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jul 6 23:48:42.116786 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 6 23:48:42.256095 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jul 6 23:48:42.256095 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:48:42.259752 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:48:42.261541 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:48:42.263341 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 6 23:48:42.264955 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:48:42.266667 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 6 23:48:42.268300 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:48:42.269983 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 6 23:48:42.271858 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:48:42.273705 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:48:42.275424 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:48:42.277824 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:48:42.280156 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:48:42.282158 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:48:42.361797 systemd-networkd[783]: eth0: Gained IPv6LL
Jul 6 23:48:42.995491 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 6 23:48:43.322240 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:48:43.322240 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 6 23:48:43.326668 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:48:43.349811 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:48:43.355401 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:48:43.356914 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:48:43.356914 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 6 23:48:43.356914 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 6 23:48:43.356914 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:48:43.356914 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:48:43.356914 ignition[957]: INFO : files: files passed
Jul 6 23:48:43.356914 ignition[957]: INFO : Ignition finished successfully
Jul 6 23:48:43.358846 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:48:43.379980 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:48:43.383253 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:48:43.385165 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:48:43.385283 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:48:43.394920 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:48:43.398615 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:48:43.398615 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:48:43.403384 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:48:43.401452 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:48:43.403642 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:48:43.416834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:48:43.442943 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:48:43.443108 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:48:43.445272 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:48:43.447220 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:48:43.447654 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:48:43.448565 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:48:43.488536 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:48:43.500757 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:48:43.511680 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:48:43.513030 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:48:43.515328 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:48:43.517469 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:48:43.517612 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:48:43.519966 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:48:43.521439 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:48:43.523435 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:48:43.525752 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:48:43.527854 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:48:43.529900 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:48:43.531908 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:48:43.534069 systemd[1]: Stopped target sysinit.target - System Initialization.
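Editor's note: every op(N) in the files stage above is driven by a declarative Ignition config. Below is a pared-down, hand-written spec-v3-style config that would produce similar operations, expressed as a Python dict for readability; the key names follow the public Ignition spec as far as I know, but the key material, data contents, and unit body are illustrative placeholders:

```python
import json

# Sketch of an Ignition config matching the logged ops: user/ssh setup
# (op 1-2), file and link writes (op 3-a), and unit handling with
# presets (op b-11). Placeholder values are marked as such.
config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {"users": [
        {"name": "core",
         "sshAuthorizedKeys": ["ssh-ed25519 AAAA... core@example"]},  # placeholder key
    ]},
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/etc/flatcar/update.conf",
             "contents": {"source": "data:,GROUP=stable%0A"}},  # illustrative body
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"},
        ],
    },
    "systemd": {"units": [
        {"name": "prepare-helm.service", "enabled": True,
         "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},  # body abbreviated
        {"name": "coreos-metadata.service", "enabled": False},
    ]},
}
print(json.dumps(config, indent=2))
```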
Jul 6 23:48:43.535931 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:48:43.538106 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:48:43.539860 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:48:43.540022 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:48:43.542019 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:48:43.543514 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:48:43.545496 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:48:43.545675 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:48:43.547709 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:48:43.547826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:48:43.549990 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:48:43.550159 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:48:43.552111 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:48:43.553733 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:48:43.557713 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:48:43.559296 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:48:43.561069 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:48:43.562819 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:48:43.562965 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:48:43.564784 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:48:43.564881 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:48:43.567092 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:48:43.567224 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:48:43.569199 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:48:43.569314 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:48:43.577767 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:48:43.580396 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:48:43.581364 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:48:43.581541 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:48:43.583565 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:48:43.583752 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:48:43.592547 ignition[1011]: INFO : Ignition 2.19.0
Jul 6 23:48:43.592547 ignition[1011]: INFO : Stage: umount
Jul 6 23:48:43.592547 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:48:43.592547 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:48:43.592398 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:48:43.598547 ignition[1011]: INFO : umount: umount passed
Jul 6 23:48:43.598547 ignition[1011]: INFO : Ignition finished successfully
Jul 6 23:48:43.592524 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:48:43.595190 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:48:43.595334 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:48:43.596484 systemd[1]: Stopped target network.target - Network.
Jul 6 23:48:43.598351 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:48:43.598419 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:48:43.598858 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:48:43.598915 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:48:43.599185 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:48:43.599241 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:48:43.599477 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:48:43.599527 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:48:43.599958 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:48:43.607014 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:48:43.614666 systemd-networkd[783]: eth0: DHCPv6 lease lost
Jul 6 23:48:43.617100 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:48:43.617273 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:48:43.619242 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:48:43.619407 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:48:43.622150 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:48:43.622251 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:48:43.628713 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:48:43.629446 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:48:43.629518 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:48:43.630039 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:48:43.630122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:48:43.630397 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:48:43.630466 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:48:43.630928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:48:43.630990 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:48:43.631660 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:48:43.648526 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:48:43.648697 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:48:43.652069 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:48:43.652289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:48:43.653507 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:48:43.653566 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:48:43.655582 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:48:43.655650 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:48:43.656030 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:48:43.656090 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:48:43.656808 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:48:43.656858 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:48:43.657469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:48:43.657517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:48:43.658954 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:48:43.667101 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:48:43.667185 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:48:43.667460 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:48:43.667526 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:48:43.672095 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:48:43.672171 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:48:43.674245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:48:43.674303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:43.675121 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:48:43.675242 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:48:43.698569 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:48:44.191383 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:48:44.191542 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:48:44.193580 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:48:44.194480 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:48:44.194538 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:48:44.202949 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:48:44.210295 systemd[1]: Switching root.
Jul 6 23:48:44.248294 systemd-journald[191]: Journal stopped
Jul 6 23:48:45.647832 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:48:45.647911 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:48:45.647925 kernel: SELinux: policy capability open_perms=1
Jul 6 23:48:45.647940 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:48:45.647952 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:48:45.647964 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:48:45.647995 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:48:45.648007 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:48:45.648018 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:48:45.648031 kernel: audit: type=1403 audit(1751845724.907:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:48:45.648061 systemd[1]: Successfully loaded SELinux policy in 41.938ms.
Jul 6 23:48:45.648078 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.107ms.
Jul 6 23:48:45.648078 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:48:45.648952 systemd[1]: Detected virtualization kvm.
Jul 6 23:48:45.648967 systemd[1]: Detected architecture x86-64.
Jul 6 23:48:45.648979 systemd[1]: Detected first boot.
Jul 6 23:48:45.648999 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:48:45.649012 zram_generator::config[1056]: No configuration found.
Jul 6 23:48:45.649026 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:48:45.649038 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:48:45.649058 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:48:45.649070 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:48:45.649083 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:48:45.649095 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:48:45.649108 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:48:45.649120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:48:45.649132 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:48:45.649144 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:48:45.649156 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:48:45.649171 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:48:45.649183 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:48:45.649195 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:48:45.649207 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:48:45.649219 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:48:45.649231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:48:45.649243 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:48:45.649255 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:48:45.649267 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:48:45.649281 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:48:45.649298 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:48:45.649311 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:48:45.649322 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:48:45.649334 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:48:45.649351 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:48:45.649363 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:48:45.649375 systemd[1]: Reached target swap.target - Swaps.
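Editor's note: the '+FOO -BAR' list in the systemd 255 banner above encodes compile-time features (enabled with '+', disabled with '-'). A small helper to turn that list into a dict; the feature string is copied from the log, the parser itself is just a convenience sketch:

```python
import re

# Feature flags from the "systemd 255 running" banner above
# (the trailing "default-hierarchy=unified" field is not a flag).
FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP -SYSVINIT")

def parse_features(banner: str) -> dict:
    """Map each uppercase feature name to True (+) or False (-)."""
    return {m.group(2): m.group(1) == "+"
            for m in re.finditer(r"([+-])([A-Z0-9_]+)", banner)}

flags = parse_features(FEATURES)
print(flags["SELINUX"], flags["APPARMOR"])  # True False
```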
Jul 6 23:48:45.649390 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:48:45.649402 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:48:45.649414 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:48:45.649426 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:48:45.649438 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:48:45.649451 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:48:45.649465 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:48:45.649477 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:48:45.649490 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:48:45.649505 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:45.649517 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:48:45.649529 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:48:45.649542 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:48:45.649555 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:48:45.649567 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:48:45.649580 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:48:45.649605 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:48:45.649621 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:48:45.649634 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:48:45.649645 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:48:45.649657 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:48:45.649669 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:48:45.649682 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:48:45.649694 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:48:45.649706 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:48:45.649720 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:48:45.649732 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:48:45.649746 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:48:45.649757 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:48:45.649770 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:48:45.649791 kernel: fuse: init (API version 7.39)
Jul 6 23:48:45.649803 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:48:45.649815 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:48:45.649827 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:48:45.649841 kernel: loop: module loaded
Jul 6 23:48:45.649853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:48:45.649865 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:48:45.649877 systemd[1]: Stopped verity-setup.service.
Jul 6 23:48:45.649890 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:45.649902 kernel: ACPI: bus type drm_connector registered
Jul 6 23:48:45.649916 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:48:45.649928 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:48:45.649939 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:48:45.649951 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:48:45.649983 systemd-journald[1130]: Collecting audit messages is disabled.
Jul 6 23:48:45.650012 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:48:45.650028 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:48:45.650040 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:48:45.650054 systemd-journald[1130]: Journal started
Jul 6 23:48:45.650076 systemd-journald[1130]: Runtime Journal (/run/log/journal/14336fd23cbb40c1bfac039100a0f19b) is 6.0M, max 48.3M, 42.2M free.
Jul 6 23:48:45.426102 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:48:45.442115 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:48:45.442609 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:48:45.653005 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:48:45.653653 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:48:45.655320 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:48:45.655509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:48:45.657032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:48:45.657213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:48:45.658632 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:48:45.658830 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:48:45.660299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:48:45.660507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:48:45.662044 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:48:45.662231 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:48:45.663611 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:48:45.663797 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:48:45.665189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:48:45.666781 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:48:45.668292 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:48:45.683675 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:48:45.696688 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:48:45.699297 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:48:45.700522 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:48:45.700634 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:48:45.702684 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:48:45.705218 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:48:45.708760 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:48:45.709937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:48:45.712449 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:48:45.715141 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:48:45.716324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:48:45.718967 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:48:45.720189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:48:45.721580 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:48:45.724232 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:48:45.726847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:48:45.730691 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:48:45.732009 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:48:45.734083 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:48:45.740710 systemd-journald[1130]: Time spent on flushing to /var/log/journal/14336fd23cbb40c1bfac039100a0f19b is 14.136ms for 1000 entries.
Jul 6 23:48:45.740710 systemd-journald[1130]: System Journal (/var/log/journal/14336fd23cbb40c1bfac039100a0f19b) is 8.0M, max 195.6M, 187.6M free.
Jul 6 23:48:45.784909 systemd-journald[1130]: Received client request to flush runtime journal.
Jul 6 23:48:45.784965 kernel: loop0: detected capacity change from 0 to 140768
Jul 6 23:48:45.736775 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:48:45.745016 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:48:45.758801 udevadm[1177]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:48:45.762796 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:48:45.767029 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:48:45.778947 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:48:45.780891 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:48:45.787576 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:48:45.788754 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:48:45.788996 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jul 6 23:48:45.789017 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jul 6 23:48:45.797763 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:48:45.805822 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:48:45.807829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:48:45.808463 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:48:45.816617 kernel: loop1: detected capacity change from 0 to 142488
Jul 6 23:48:45.836210 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:48:45.844820 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:48:45.850626 kernel: loop2: detected capacity change from 0 to 221472
Jul 6 23:48:45.863044 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jul 6 23:48:45.863065 systemd-tmpfiles[1193]: ACLs are not supported, ignoring.
Jul 6 23:48:45.868927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:48:45.887630 kernel: loop3: detected capacity change from 0 to 140768
Jul 6 23:48:45.899829 kernel: loop4: detected capacity change from 0 to 142488
Jul 6 23:48:45.911621 kernel: loop5: detected capacity change from 0 to 221472
Jul 6 23:48:45.917556 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:48:45.919241 (sd-merge)[1197]: Merged extensions into '/usr'.
Jul 6 23:48:45.923349 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:48:45.923445 systemd[1]: Reloading...
Jul 6 23:48:45.980813 zram_generator::config[1222]: No configuration found.
Jul 6 23:48:46.066201 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:48:46.109878 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:48:46.158950 systemd[1]: Reloading finished in 234 ms.
Jul 6 23:48:46.191418 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:48:46.193127 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:48:46.205771 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:48:46.207703 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:48:46.215947 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:48:46.215964 systemd[1]: Reloading...
Jul 6 23:48:46.232853 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:48:46.233259 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:48:46.234336 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:48:46.234666 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
Jul 6 23:48:46.234754 systemd-tmpfiles[1261]: ACLs are not supported, ignoring.
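Editor's note: the loopN capacity changes and the '(sd-merge)' lines above show systemd-sysext attaching the extension images written earlier by Ignition (containerd-flatcar, docker-flatcar, kubernetes) and merging them over /usr. A simplified model of the discovery step only; real sysext also validates an extension-release file and performs an overlay mount, which this sketch omits:

```python
import os

def discover_extensions(*dirs: str) -> list[str]:
    """List candidate sysext images (*.raw) or extension directories
    in the standard search paths, roughly mirroring how the log's
    '(sd-merge)' step ends up with a set of extension names."""
    found = []
    for d in dirs:
        if not os.path.isdir(d):
            continue
        for entry in sorted(os.listdir(d)):
            full = os.path.join(d, entry)
            if entry.endswith(".raw") or os.path.isdir(full):
                found.append(entry.removesuffix(".raw"))
    return found

# /etc/extensions/kubernetes.raw was created by the Ignition files stage.
print(discover_extensions("/etc/extensions", "/run/extensions", "/var/lib/extensions"))
```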
Jul 6 23:48:46.238570 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:48:46.238583 systemd-tmpfiles[1261]: Skipping /boot
Jul 6 23:48:46.252719 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:48:46.252803 systemd-tmpfiles[1261]: Skipping /boot
Jul 6 23:48:46.285685 zram_generator::config[1297]: No configuration found.
Jul 6 23:48:46.674360 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:48:46.724471 systemd[1]: Reloading finished in 508 ms.
Jul 6 23:48:46.745443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:48:46.765856 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:48:46.768615 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:48:46.771374 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:48:46.775528 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:48:46.782810 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:48:46.789272 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:48:46.827666 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:46.827925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:48:46.829737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:48:46.832322 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:48:46.838483 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:48:46.840038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:48:46.840172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:46.841201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:48:46.841400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:48:46.843205 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:48:46.843405 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:48:46.845428 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:48:46.845633 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:48:46.854221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:46.854479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:48:46.865541 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:48:46.868916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:48:46.874887 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
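Editor's note: the 'Duplicate line for path ..., ignoring.' warnings just above show systemd-tmpfiles keeping the first entry it processes for a given path and discarding later ones. A toy version of that dedup rule; the parsing here is deliberately minimal and not tmpfiles.d's real grammar or precedence logic:

```python
def dedup_tmpfiles(lines: list[str]) -> list[str]:
    """Keep the first tmpfiles.d-style entry per path, as the
    'Duplicate line for path "...", ignoring.' warnings imply."""
    seen: set[str] = set()
    kept = []
    for line in lines:
        fields = line.split()
        if len(fields) < 2 or line.lstrip().startswith("#"):
            continue  # skip comments and malformed lines
        path = fields[1]
        if path in seen:
            continue  # a later duplicate is ignored, as in the log
        seen.add(path)
        kept.append(line)
    return kept

# The second /root line would trigger the logged warning and be dropped.
print(dedup_tmpfiles(["d /root 0700 - - -", "d /root 0750 - - -"]))
```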
Jul 6 23:48:46.875457 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:48:46.875622 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:46.876970 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:48:46.878382 augenrules[1356]: No rules
Jul 6 23:48:46.879451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:48:46.879801 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:48:46.882211 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:48:46.884250 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:48:46.884437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:48:46.886573 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:48:46.886870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:48:46.892144 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:48:46.899869 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:46.900125 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:48:46.904872 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:48:46.908431 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:48:46.913757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:48:46.929826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:48:46.931113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:48:46.931298 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:48:46.932167 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 6 23:48:46.935331 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:48:46.935524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:48:46.937250 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:48:46.937427 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:48:46.939119 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 6 23:48:46.941051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:48:46.941238 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:48:46.943092 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:48:46.943272 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:48:46.947055 systemd[1]: Finished ensure-sysext.service.
Jul 6 23:48:46.952495 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:48:46.952589 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:48:46.966866 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 6 23:48:47.016267 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 6 23:48:47.016740 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:48:47.027251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:48:47.030779 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 6 23:48:47.033726 systemd-resolved[1330]: Positive Trust Anchors:
Jul 6 23:48:47.033747 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:48:47.033778 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:48:47.042343 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Jul 6 23:48:47.044423 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:48:47.045673 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:48:47.048354 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 6 23:48:47.065468 systemd-udevd[1384]: Using default interface naming scheme 'v255'.
Jul 6 23:48:47.080222 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:48:47.130829 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:48:47.133678 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:48:47.143851 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:48:47.180359 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 6 23:48:47.204656 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1389)
Jul 6 23:48:47.214108 systemd-networkd[1393]: lo: Link UP
Jul 6 23:48:47.214120 systemd-networkd[1393]: lo: Gained carrier
Jul 6 23:48:47.216308 systemd-networkd[1393]: Enumeration completed
Jul 6 23:48:47.216466 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:48:47.217584 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:48:47.217604 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:48:47.217771 systemd[1]: Reached target network.target - Network.
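Editor's note: the negative trust anchors listed above are zones systemd-resolved exempts from DNSSEC validation (private ranges, .local, and similar). A small suffix-match sketch against a subset of that list; resolved's actual lookup logic is more involved than this:

```python
# Subset of the negative trust anchors logged above.
NEGATIVE_ANCHORS = {
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
    "corp", "home", "internal", "intranet", "lan", "local", "private", "test",
}

def dnssec_exempt(name: str) -> bool:
    """True if any suffix of 'name' is a negative trust anchor,
    i.e. a zone that will not be DNSSEC-validated."""
    labels = name.rstrip(".").split(".")
    return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
               for i in range(len(labels)))

print(dnssec_exempt("printer.lan"))   # True: falls under 'lan'
print(dnssec_exempt("example.org"))   # False: validated normally
```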
Jul 6 23:48:47.218415 systemd-networkd[1393]: eth0: Link UP
Jul 6 23:48:47.218420 systemd-networkd[1393]: eth0: Gained carrier
Jul 6 23:48:47.218433 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:48:47.226864 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:48:47.229660 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.53/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:48:47.230461 systemd-timesyncd[1382]: Network configuration changed, trying to establish connection.
Jul 6 23:48:47.888152 systemd-timesyncd[1382]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:48:47.888210 systemd-timesyncd[1382]: Initial clock synchronization to Sun 2025-07-06 23:48:47.888045 UTC.
Jul 6 23:48:47.888661 systemd-resolved[1330]: Clock change detected. Flushing caches.
Jul 6 23:48:47.895279 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:48:47.900558 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jul 6 23:48:47.901793 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:48:47.904565 kernel: ACPI: button: Power Button [PWRF]
Jul 6 23:48:47.914162 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Jul 6 23:48:47.916000 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 6 23:48:47.921984 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:48:47.923717 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 6 23:48:47.923963 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 6 23:48:47.930576 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jul 6 23:48:47.958860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:47.988605 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:48:47.988856 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:47.992561 kernel: mousedev: PS/2 mouse device common for all mice
Jul 6 23:48:48.045716 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:48:48.055816 kernel: kvm_amd: TSC scaling supported
Jul 6 23:48:48.055853 kernel: kvm_amd: Nested Virtualization enabled
Jul 6 23:48:48.055867 kernel: kvm_amd: Nested Paging enabled
Jul 6 23:48:48.055889 kernel: kvm_amd: LBR virtualization supported
Jul 6 23:48:48.056905 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 6 23:48:48.056981 kernel: kvm_amd: Virtual GIF supported
Jul 6 23:48:48.080565 kernel: EDAC MC: Ver: 3.0.0
Jul 6 23:48:48.109051 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 6 23:48:48.110802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:48:48.124798 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 6 23:48:48.132987 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:48:48.162169 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 6 23:48:48.183932 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
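Editor's note: between 'Network configuration changed' and 'Contacted time server' above, systemd-timesyncd steps the wall clock, which is why systemd-resolved then flushes its caches. The surrounding journal timestamps let us bound the apparent step; this is only an approximation, not an exact offset measurement:

```python
from datetime import datetime

# Last timestamp before the sync and the synchronized time reported by
# timesyncd ("Initial clock synchronization to ... 23:48:47.888045 UTC").
before = datetime.strptime("23:48:47.230461", "%H:%M:%S.%f")
after = datetime.strptime("23:48:47.888045", "%H:%M:%S.%f")

# The wall clock appears to jump forward by roughly 0.66 s, though part
# of that interval is real elapsed time spent contacting the NTP server.
print(f"{(after - before).total_seconds():.6f} s")  # 0.657584 s
```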
Jul 6 23:48:48.185039 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:48:48.186196 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:48:48.212460 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:48:48.214090 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:48:48.215227 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:48:48.216443 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:48:48.217663 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:48:48.217690 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:48:48.218590 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:48:48.220418 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:48:48.223220 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:48:48.241530 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:48:48.243945 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 6 23:48:48.245472 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:48:48.246650 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:48:48.247588 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:48:48.248514 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:48:48.248553 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:48:48.249683 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:48:48.251824 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:48:48.254088 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 6 23:48:48.255642 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:48:48.257779 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:48:48.258910 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:48:48.262732 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:48:48.267652 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:48:48.269961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:48:48.272187 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:48:48.274785 jq[1443]: false
Jul 6 23:48:48.277848 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:48:48.279713 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:48:48.280177 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:48:48.281344 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:48:48.285523 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:48:48.287794 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 6 23:48:48.290940 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:48:48.291165 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:48:48.291511 extend-filesystems[1444]: Found loop3
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found loop4
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found loop5
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found sr0
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda1
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda2
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda3
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found usr
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda4
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda6
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda7
Jul 6 23:48:48.310652 extend-filesystems[1444]: Found vda9
Jul 6 23:48:48.310652 extend-filesystems[1444]: Checking size of /dev/vda9
Jul 6 23:48:48.316190 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:48:48.318655 dbus-daemon[1442]: [system] SELinux support is enabled
Jul 6 23:48:48.317088 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:48:48.327261 update_engine[1451]: I20250706 23:48:48.324362 1451 main.cc:92] Flatcar Update Engine starting
Jul 6 23:48:48.327261 update_engine[1451]: I20250706 23:48:48.326906 1451 update_check_scheduler.cc:74] Next update check in 5m55s
Jul 6 23:48:48.322338 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:48:48.340860 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:48:48.341134 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:48:48.343226 jq[1453]: true
Jul 6 23:48:48.352147 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:48:48.352185 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:48:48.352838 tar[1458]: linux-amd64/helm
Jul 6 23:48:48.353582 (ntainerd)[1469]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:48:48.357654 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:48:48.357681 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:48:48.358952 jq[1470]: true
Jul 6 23:48:48.363787 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:48:48.368905 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:48:48.374058 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 6 23:48:48.374090 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 6 23:48:48.377041 systemd-logind[1450]: New seat seat0.
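Editor's note: 'Checking size of /dev/vda9' above leads into the online resize logged just below, which grows / from 553472 to 1864699 blocks. Quick arithmetic on the figures reported by the kernel and resize2fs:

```python
BLOCK = 4096  # resize2fs reports "(4k) blocks" for /dev/vda9
old_blocks, new_blocks = 553472, 1864699

print(f"before: {old_blocks * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_blocks * BLOCK / 2**30:.2f} GiB")  # ~7.11 GiB
```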
Jul 6 23:48:48.382286 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:48:48.387969 extend-filesystems[1444]: Resized partition /dev/vda9 Jul 6 23:48:48.391997 extend-filesystems[1488]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:48:48.432576 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1390) Jul 6 23:48:48.488566 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:48:48.505376 locksmithd[1478]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:48:48.678803 sshd_keygen[1460]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:48:48.703011 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:48:48.710818 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:48:48.719394 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:48:48.719695 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:48:48.727785 tar[1458]: linux-amd64/LICENSE Jul 6 23:48:48.727882 tar[1458]: linux-amd64/README.md Jul 6 23:48:48.728827 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:48:48.740787 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:48:48.784441 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:48:48.827823 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:48:48.829953 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:48:48.831167 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:48:48.990581 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:48:49.041787 extend-filesystems[1488]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:48:49.041787 extend-filesystems[1488]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:48:49.041787 extend-filesystems[1488]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:48:49.046955 bash[1498]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:48:49.043670 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:48:49.047154 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Jul 6 23:48:49.046149 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:48:49.049869 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:48:49.050156 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:48:49.122792 containerd[1469]: time="2025-07-06T23:48:49.122670571Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:48:49.146035 containerd[1469]: time="2025-07-06T23:48:49.145932898Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148121 containerd[1469]: time="2025-07-06T23:48:49.148068021Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148121 containerd[1469]: time="2025-07-06T23:48:49.148104680Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 6 23:48:49.148180 containerd[1469]: time="2025-07-06T23:48:49.148125269Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:48:49.148433 containerd[1469]: time="2025-07-06T23:48:49.148401587Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:48:49.148458 containerd[1469]: time="2025-07-06T23:48:49.148440951Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148583 containerd[1469]: time="2025-07-06T23:48:49.148558711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148619 containerd[1469]: time="2025-07-06T23:48:49.148581634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148913 containerd[1469]: time="2025-07-06T23:48:49.148875285Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148913 containerd[1469]: time="2025-07-06T23:48:49.148902586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148951 containerd[1469]: time="2025-07-06T23:48:49.148920219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:48:49.148951 containerd[1469]: time="2025-07-06T23:48:49.148934306Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.149083 containerd[1469]: time="2025-07-06T23:48:49.149055142Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.149454 containerd[1469]: time="2025-07-06T23:48:49.149422842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:48:49.149633 containerd[1469]: time="2025-07-06T23:48:49.149608049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:48:49.149671 containerd[1469]: time="2025-07-06T23:48:49.149632775Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:48:49.149810 containerd[1469]: time="2025-07-06T23:48:49.149778368Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:48:49.149890 containerd[1469]: time="2025-07-06T23:48:49.149869259Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:48:49.156347 containerd[1469]: time="2025-07-06T23:48:49.156258879Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:48:49.156347 containerd[1469]: time="2025-07-06T23:48:49.156352745Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 6 23:48:49.156552 containerd[1469]: time="2025-07-06T23:48:49.156372482Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:48:49.156552 containerd[1469]: time="2025-07-06T23:48:49.156399092Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:48:49.156552 containerd[1469]: time="2025-07-06T23:48:49.156417426Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:48:49.156720 containerd[1469]: time="2025-07-06T23:48:49.156684447Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:48:49.157017 containerd[1469]: time="2025-07-06T23:48:49.156979039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:48:49.157137 containerd[1469]: time="2025-07-06T23:48:49.157102200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:48:49.157137 containerd[1469]: time="2025-07-06T23:48:49.157126225Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:48:49.157192 containerd[1469]: time="2025-07-06T23:48:49.157141143Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:48:49.157192 containerd[1469]: time="2025-07-06T23:48:49.157160009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157192 containerd[1469]: time="2025-07-06T23:48:49.157175057Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157192 containerd[1469]: time="2025-07-06T23:48:49.157190326Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157294 containerd[1469]: time="2025-07-06T23:48:49.157206596Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157294 containerd[1469]: time="2025-07-06T23:48:49.157234559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157294 containerd[1469]: time="2025-07-06T23:48:49.157253103Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157294 containerd[1469]: time="2025-07-06T23:48:49.157268352Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157294 containerd[1469]: time="2025-07-06T23:48:49.157283400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157306894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157325088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157340657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157354934Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157368560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157383949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157397865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157415 containerd[1469]: time="2025-07-06T23:48:49.157412743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157429093Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157446095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157459701Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157476642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157491841Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157508853Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157530974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157568415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.157624 containerd[1469]: time="2025-07-06T23:48:49.157581880Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157638807Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157661279Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157676367Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157691445Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157704009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157720560Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157734195Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:48:49.157845 containerd[1469]: time="2025-07-06T23:48:49.157754293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:48:49.158153 containerd[1469]: time="2025-07-06T23:48:49.158062281Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:48:49.158153 containerd[1469]: time="2025-07-06T23:48:49.158139305Z" level=info msg="Connect containerd service" Jul 6 23:48:49.158352 containerd[1469]: time="2025-07-06T23:48:49.158196673Z" level=info msg="using legacy CRI server" Jul 6 23:48:49.158352 containerd[1469]: time="2025-07-06T23:48:49.158206411Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:48:49.158352 containerd[1469]: 
time="2025-07-06T23:48:49.158300828Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:48:49.159165 containerd[1469]: time="2025-07-06T23:48:49.159124041Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:48:49.159343 containerd[1469]: time="2025-07-06T23:48:49.159287107Z" level=info msg="Start subscribing containerd event" Jul 6 23:48:49.159375 containerd[1469]: time="2025-07-06T23:48:49.159361867Z" level=info msg="Start recovering state" Jul 6 23:48:49.159463 containerd[1469]: time="2025-07-06T23:48:49.159441266Z" level=info msg="Start event monitor" Jul 6 23:48:49.159492 containerd[1469]: time="2025-07-06T23:48:49.159475981Z" level=info msg="Start snapshots syncer" Jul 6 23:48:49.159492 containerd[1469]: time="2025-07-06T23:48:49.159487703Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:48:49.159548 containerd[1469]: time="2025-07-06T23:48:49.159496890Z" level=info msg="Start streaming server" Jul 6 23:48:49.159628 containerd[1469]: time="2025-07-06T23:48:49.159602869Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:48:49.159687 containerd[1469]: time="2025-07-06T23:48:49.159670436Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:48:49.159890 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:48:49.160268 containerd[1469]: time="2025-07-06T23:48:49.160238902Z" level=info msg="containerd successfully booted in 0.039193s" Jul 6 23:48:49.226706 systemd-networkd[1393]: eth0: Gained IPv6LL Jul 6 23:48:49.230235 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:48:49.231947 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:48:49.243764 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:48:49.246175 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:48:49.248367 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:48:49.268119 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:48:49.268388 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:48:49.269966 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:48:49.275037 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:48:49.972624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:48:49.974439 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:48:49.978076 systemd[1]: Startup finished in 1.061s (kernel) + 6.203s (initrd) + 4.453s (userspace) = 11.718s. 
Jul 6 23:48:49.978349 (kubelet)[1556]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:48:51.200101 kubelet[1556]: E0706 23:48:51.199955 1556 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:48:51.204967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:48:51.205206 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:48:51.205715 systemd[1]: kubelet.service: Consumed 1.832s CPU time. Jul 6 23:48:52.512597 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:48:52.514409 systemd[1]: Started sshd@0-10.0.0.53:22-10.0.0.1:42746.service - OpenSSH per-connection server daemon (10.0.0.1:42746). Jul 6 23:48:52.575972 sshd[1570]: Accepted publickey for core from 10.0.0.1 port 42746 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:52.578352 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:52.590570 systemd-logind[1450]: New session 1 of user core. Jul 6 23:48:52.592553 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:48:52.605878 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:48:52.622040 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:48:52.625657 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:48:52.634229 (systemd)[1574]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:48:52.777169 systemd[1574]: Queued start job for default target default.target. Jul 6 23:48:52.786062 systemd[1574]: Created slice app.slice - User Application Slice. Jul 6 23:48:52.786090 systemd[1574]: Reached target paths.target - Paths. Jul 6 23:48:52.786103 systemd[1574]: Reached target timers.target - Timers. Jul 6 23:48:52.787967 systemd[1574]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:48:52.801617 systemd[1574]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:48:52.801760 systemd[1574]: Reached target sockets.target - Sockets. Jul 6 23:48:52.801778 systemd[1574]: Reached target basic.target - Basic System. Jul 6 23:48:52.801817 systemd[1574]: Reached target default.target - Main User Target. Jul 6 23:48:52.801855 systemd[1574]: Startup finished in 159ms. Jul 6 23:48:52.802265 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:48:52.803914 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:48:52.863554 systemd[1]: Started sshd@1-10.0.0.53:22-10.0.0.1:42758.service - OpenSSH per-connection server daemon (10.0.0.1:42758). Jul 6 23:48:52.899906 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 42758 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:52.901475 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:52.905245 systemd-logind[1450]: New session 2 of user core. Jul 6 23:48:52.913660 systemd[1]: Started session-2.scope - Session 2 of User core. 
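The kubelet exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on this kind of node kubeadm writes that file during init/join, and systemd keeps restarting the unit until it appears (the same error recurs twice below). For illustration only, the smallest KubeletConfiguration that would satisfy the path is sketched here; cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd config above, but the file as a whole is an assumed example, not what this host is later given:

sudo mkdir -p /var/lib/kubelet
cat <<'EOF' | sudo tee /var/lib/kubelet/config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
sudo systemctl restart kubelet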
Jul 6 23:48:52.967348 sshd[1585]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:52.979486 systemd[1]: sshd@1-10.0.0.53:22-10.0.0.1:42758.service: Deactivated successfully. Jul 6 23:48:52.981094 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:48:52.983151 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:48:52.989927 systemd[1]: Started sshd@2-10.0.0.53:22-10.0.0.1:42760.service - OpenSSH per-connection server daemon (10.0.0.1:42760). Jul 6 23:48:52.991770 systemd-logind[1450]: Removed session 2. Jul 6 23:48:53.016504 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 42760 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:53.018008 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:53.021963 systemd-logind[1450]: New session 3 of user core. Jul 6 23:48:53.031653 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:48:53.082131 sshd[1592]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:53.096046 systemd[1]: sshd@2-10.0.0.53:22-10.0.0.1:42760.service: Deactivated successfully. Jul 6 23:48:53.097654 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:48:53.098967 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:48:53.109782 systemd[1]: Started sshd@3-10.0.0.53:22-10.0.0.1:42764.service - OpenSSH per-connection server daemon (10.0.0.1:42764). Jul 6 23:48:53.110627 systemd-logind[1450]: Removed session 3. Jul 6 23:48:53.137033 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 42764 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:53.138576 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:53.142285 systemd-logind[1450]: New session 4 of user core. Jul 6 23:48:53.152646 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:48:53.206648 sshd[1599]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:53.219060 systemd[1]: sshd@3-10.0.0.53:22-10.0.0.1:42764.service: Deactivated successfully. Jul 6 23:48:53.220809 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:48:53.222472 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:48:53.227784 systemd[1]: Started sshd@4-10.0.0.53:22-10.0.0.1:42778.service - OpenSSH per-connection server daemon (10.0.0.1:42778). Jul 6 23:48:53.228477 systemd-logind[1450]: Removed session 4. Jul 6 23:48:53.255185 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 42778 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:53.256662 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:53.260361 systemd-logind[1450]: New session 5 of user core. Jul 6 23:48:53.269669 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:48:53.327843 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:48:53.328204 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:53.350587 sudo[1609]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:53.352298 sshd[1606]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:53.365476 systemd[1]: sshd@4-10.0.0.53:22-10.0.0.1:42778.service: Deactivated successfully. Jul 6 23:48:53.367345 systemd[1]: session-5.scope: Deactivated successfully. 
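Each accepted login above records the client key's SHA256 fingerprint (Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ). The matching check on the client side recomputes the fingerprint of the public key; the ~/.ssh/id_rsa.pub path below is the OpenSSH default and only an assumption about where the key lives:

ssh-keygen -lf ~/.ssh/id_rsa.pub   # prints the SHA256 fingerprint to compare with the sshd records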
Jul 6 23:48:53.368705 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:48:53.383800 systemd[1]: Started sshd@5-10.0.0.53:22-10.0.0.1:42786.service - OpenSSH per-connection server daemon (10.0.0.1:42786). Jul 6 23:48:53.384739 systemd-logind[1450]: Removed session 5. Jul 6 23:48:53.410971 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 42786 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:53.412865 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:53.416662 systemd-logind[1450]: New session 6 of user core. Jul 6 23:48:53.426649 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:48:53.482461 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:48:53.482955 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:53.487271 sudo[1618]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:53.494233 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 6 23:48:53.494595 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:53.520758 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 6 23:48:53.522500 auditctl[1621]: No rules Jul 6 23:48:53.523775 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:48:53.524037 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 6 23:48:53.525950 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:48:53.558163 augenrules[1639]: No rules Jul 6 23:48:53.560059 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:48:53.561425 sudo[1617]: pam_unix(sudo:session): session closed for user root Jul 6 23:48:53.563207 sshd[1614]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:53.573324 systemd[1]: sshd@5-10.0.0.53:22-10.0.0.1:42786.service: Deactivated successfully. Jul 6 23:48:53.575188 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:48:53.576880 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:48:53.578208 systemd[1]: Started sshd@6-10.0.0.53:22-10.0.0.1:42792.service - OpenSSH per-connection server daemon (10.0.0.1:42792). Jul 6 23:48:53.579328 systemd-logind[1450]: Removed session 6. Jul 6 23:48:53.609297 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 42792 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:48:53.610718 sshd[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:53.614669 systemd-logind[1450]: New session 7 of user core. Jul 6 23:48:53.624672 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:48:53.680091 sudo[1650]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:48:53.680431 sudo[1650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:48:54.273777 systemd[1]: Starting docker.service - Docker Application Container Engine... 
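The audit-rules restart above finishes with augenrules reporting "No rules": the sudo session first removed the rule files from /etc/audit/rules.d. augenrules concatenates the *.rules files in that directory and loads the result through auditctl, so restoring any rule file and reloading brings rules back. The watch rule below is an arbitrary example, not one this host ever had:

echo '-w /etc/passwd -p wa -k passwd-changes' | sudo tee /etc/audit/rules.d/10-example.rules
sudo augenrules --load   # rebuild and load the combined rule set
sudo auditctl -l         # confirm what the kernel now enforces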
Jul 6 23:48:54.273960 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:48:54.788426 dockerd[1667]: time="2025-07-06T23:48:54.788360829Z" level=info msg="Starting up" Jul 6 23:48:55.295163 dockerd[1667]: time="2025-07-06T23:48:55.295111997Z" level=info msg="Loading containers: start." Jul 6 23:48:55.409560 kernel: Initializing XFRM netlink socket Jul 6 23:48:55.489804 systemd-networkd[1393]: docker0: Link UP Jul 6 23:48:55.510043 dockerd[1667]: time="2025-07-06T23:48:55.509996425Z" level=info msg="Loading containers: done." Jul 6 23:48:55.524788 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3306139388-merged.mount: Deactivated successfully. Jul 6 23:48:55.525473 dockerd[1667]: time="2025-07-06T23:48:55.525427549Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:48:55.525597 dockerd[1667]: time="2025-07-06T23:48:55.525577039Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:48:55.525751 dockerd[1667]: time="2025-07-06T23:48:55.525725868Z" level=info msg="Daemon has completed initialization" Jul 6 23:48:55.560590 dockerd[1667]: time="2025-07-06T23:48:55.560246855Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:48:55.560468 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:48:56.267571 containerd[1469]: time="2025-07-06T23:48:56.267504322Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:48:56.939377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount603430165.mount: Deactivated successfully. 
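dockerd reports "API listen on /run/docker.sock" once initialization completes. The engine's _ping endpoint on that socket is the standard liveness probe, and the CLI can confirm the daemon version logged above (26.1.0):

curl -sS --unix-socket /run/docker.sock http://localhost/_ping   # prints OK when the daemon is healthy
docker version --format '{{.Server.Version}}'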
Jul 6 23:48:57.950272 containerd[1469]: time="2025-07-06T23:48:57.950197231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:57.951037 containerd[1469]: time="2025-07-06T23:48:57.951010696Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744" Jul 6 23:48:57.952174 containerd[1469]: time="2025-07-06T23:48:57.952147608Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:57.954800 containerd[1469]: time="2025-07-06T23:48:57.954766749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:57.955854 containerd[1469]: time="2025-07-06T23:48:57.955826316Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.688258123s" Jul 6 23:48:57.955916 containerd[1469]: time="2025-07-06T23:48:57.955858466Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 6 23:48:57.956550 containerd[1469]: time="2025-07-06T23:48:57.956502954Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:48:59.429717 containerd[1469]: time="2025-07-06T23:48:59.429660204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:59.430363 containerd[1469]: time="2025-07-06T23:48:59.430316354Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294" Jul 6 23:48:59.431409 containerd[1469]: time="2025-07-06T23:48:59.431383235Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:59.434200 containerd[1469]: time="2025-07-06T23:48:59.434170891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:48:59.435197 containerd[1469]: time="2025-07-06T23:48:59.435166198Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.478634479s" Jul 6 23:48:59.435197 containerd[1469]: time="2025-07-06T23:48:59.435197536Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 6 23:48:59.435714 
containerd[1469]: time="2025-07-06T23:48:59.435688637Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:49:01.295161 containerd[1469]: time="2025-07-06T23:49:01.295075227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:01.297157 containerd[1469]: time="2025-07-06T23:49:01.297047495Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671" Jul 6 23:49:01.298875 containerd[1469]: time="2025-07-06T23:49:01.298769914Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:01.306783 containerd[1469]: time="2025-07-06T23:49:01.306710483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:01.308137 containerd[1469]: time="2025-07-06T23:49:01.307982207Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.872264255s" Jul 6 23:49:01.308137 containerd[1469]: time="2025-07-06T23:49:01.308035377Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 6 23:49:01.308814 containerd[1469]: time="2025-07-06T23:49:01.308714651Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:49:01.455573 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:49:01.463840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:49:01.651140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:01.656933 (kubelet)[1886]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:49:01.957253 kubelet[1886]: E0706 23:49:01.957027 1886 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:49:01.964670 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:49:01.964952 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:49:03.259226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693594317.mount: Deactivated successfully. 
Jul 6 23:49:05.277118 containerd[1469]: time="2025-07-06T23:49:05.277035852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:05.297450 containerd[1469]: time="2025-07-06T23:49:05.297343639Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 6 23:49:05.301892 containerd[1469]: time="2025-07-06T23:49:05.301834760Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:05.317992 containerd[1469]: time="2025-07-06T23:49:05.317953051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:05.318710 containerd[1469]: time="2025-07-06T23:49:05.318647544Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 4.009878892s" Jul 6 23:49:05.318710 containerd[1469]: time="2025-07-06T23:49:05.318702396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:49:05.319237 containerd[1469]: time="2025-07-06T23:49:05.319149274Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:49:06.056344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413948126.mount: Deactivated successfully. 
Jul 6 23:49:08.195571 containerd[1469]: time="2025-07-06T23:49:08.195444377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:08.196281 containerd[1469]: time="2025-07-06T23:49:08.196235841Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 6 23:49:08.197644 containerd[1469]: time="2025-07-06T23:49:08.197603826Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:08.201067 containerd[1469]: time="2025-07-06T23:49:08.201035621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:08.202592 containerd[1469]: time="2025-07-06T23:49:08.202525955Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.883335373s" Jul 6 23:49:08.202639 containerd[1469]: time="2025-07-06T23:49:08.202597629Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 6 23:49:08.203214 containerd[1469]: time="2025-07-06T23:49:08.203143763Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:49:10.730389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320918255.mount: Deactivated successfully. 
Jul 6 23:49:10.914316 containerd[1469]: time="2025-07-06T23:49:10.914197348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:10.919823 containerd[1469]: time="2025-07-06T23:49:10.919754428Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:49:10.922184 containerd[1469]: time="2025-07-06T23:49:10.922111487Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:10.929511 containerd[1469]: time="2025-07-06T23:49:10.929445558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:10.930762 containerd[1469]: time="2025-07-06T23:49:10.930671567Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.727488891s" Jul 6 23:49:10.930762 containerd[1469]: time="2025-07-06T23:49:10.930744033Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:49:10.931366 containerd[1469]: time="2025-07-06T23:49:10.931338147Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:49:11.498145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2038592866.mount: Deactivated successfully. Jul 6 23:49:12.215150 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:49:12.226690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:49:12.400494 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:12.405408 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:49:12.586503 kubelet[2015]: E0706 23:49:12.586336 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:49:12.591726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:49:12.592038 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
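This is the kubelet's second scheduled restart for the same missing-config error; systemd counts the attempts per unit. Two standard calls are enough to watch the crash loop from a shell:

systemctl show kubelet.service --property=NRestarts   # restarts since the unit last settled
journalctl -u kubelet.service -n 20 --no-pager        # the most recent kubelet output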
Jul 6 23:49:16.173224 containerd[1469]: time="2025-07-06T23:49:16.173144142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:16.354319 containerd[1469]: time="2025-07-06T23:49:16.354199290Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jul 6 23:49:16.433313 containerd[1469]: time="2025-07-06T23:49:16.433127270Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:16.475682 containerd[1469]: time="2025-07-06T23:49:16.475621918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:16.477097 containerd[1469]: time="2025-07-06T23:49:16.477031851Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.545658288s" Jul 6 23:49:16.477097 containerd[1469]: time="2025-07-06T23:49:16.477072878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 6 23:49:19.042200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:19.053776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:49:19.079200 systemd[1]: Reloading requested from client PID 2059 ('systemctl') (unit session-7.scope)... Jul 6 23:49:19.079217 systemd[1]: Reloading... Jul 6 23:49:19.183676 zram_generator::config[2101]: No configuration found. Jul 6 23:49:19.693445 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:49:19.772084 systemd[1]: Reloading finished in 692 ms. Jul 6 23:49:19.828716 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:49:19.828816 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:49:19.829124 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:19.831942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:49:20.004503 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:20.009851 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:49:20.048141 kubelet[2147]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:49:20.048141 kubelet[2147]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
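During the reload above, systemd warns that docker.socket still sets ListenStream under the legacy /var/run/ directory and rewrites it to /run/docker.sock on the fly. The durable fix it asks for is a unit update; one conventional form is a drop-in that clears the inherited listener and re-adds the new path (the drop-in file name here is an arbitrary choice):

sudo mkdir -p /etc/systemd/system/docker.socket.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.socket.d/10-run-path.conf
[Socket]
ListenStream=
ListenStream=/run/docker.sock
EOF
sudo systemctl daemon-reload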
Jul 6 23:49:20.048141 kubelet[2147]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:49:20.048606 kubelet[2147]: I0706 23:49:20.048178 2147 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:49:20.371007 kubelet[2147]: I0706 23:49:20.370868 2147 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:49:20.371007 kubelet[2147]: I0706 23:49:20.370900 2147 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:49:20.371193 kubelet[2147]: I0706 23:49:20.371165 2147 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:49:20.390531 kubelet[2147]: E0706 23:49:20.390466 2147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:20.394435 kubelet[2147]: I0706 23:49:20.394398 2147 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:49:20.402589 kubelet[2147]: E0706 23:49:20.402547 2147 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:49:20.402589 kubelet[2147]: I0706 23:49:20.402586 2147 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:49:20.409457 kubelet[2147]: I0706 23:49:20.409409 2147 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:49:20.410238 kubelet[2147]: I0706 23:49:20.410202 2147 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:49:20.410436 kubelet[2147]: I0706 23:49:20.410401 2147 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:49:20.410667 kubelet[2147]: I0706 23:49:20.410427 2147 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:49:20.410810 kubelet[2147]: I0706 23:49:20.410685 2147 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:49:20.410810 kubelet[2147]: I0706 23:49:20.410700 2147 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:49:20.410902 kubelet[2147]: I0706 23:49:20.410883 2147 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:49:20.413815 kubelet[2147]: I0706 23:49:20.413770 2147 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:49:20.413815 kubelet[2147]: I0706 23:49:20.413800 2147 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:49:20.413971 kubelet[2147]: I0706 23:49:20.413841 2147 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:49:20.413971 kubelet[2147]: I0706 23:49:20.413869 2147 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:49:20.417035 kubelet[2147]: I0706 23:49:20.416982 2147 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:49:20.417525 kubelet[2147]: I0706 23:49:20.417486 2147 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:49:20.417675 kubelet[2147]: W0706 23:49:20.417651 2147 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:49:20.418507 kubelet[2147]: W0706 23:49:20.418446 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:20.421002 kubelet[2147]: I0706 23:49:20.420848 2147 server.go:1274] "Started kubelet" Jul 6 23:49:20.421616 kubelet[2147]: E0706 23:49:20.418511 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:20.422609 kubelet[2147]: W0706 23:49:20.422569 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:20.422757 kubelet[2147]: E0706 23:49:20.422707 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:20.425165 kubelet[2147]: I0706 23:49:20.422795 2147 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:49:20.425165 kubelet[2147]: I0706 23:49:20.423238 2147 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:49:20.425165 kubelet[2147]: I0706 23:49:20.422811 2147 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:49:20.454495 kubelet[2147]: I0706 23:49:20.454042 2147 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:49:20.455226 kubelet[2147]: I0706 23:49:20.455200 2147 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:49:20.457773 kubelet[2147]: I0706 23:49:20.456104 2147 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:49:20.458161 kubelet[2147]: E0706 23:49:20.458109 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:20.458258 kubelet[2147]: I0706 23:49:20.458236 2147 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:49:20.458368 kubelet[2147]: I0706 23:49:20.458356 2147 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:49:20.458418 kubelet[2147]: I0706 23:49:20.458407 2147 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:49:20.458453 kubelet[2147]: E0706 23:49:20.458409 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="200ms" Jul 6 23:49:20.458754 kubelet[2147]: W0706 23:49:20.458711 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:20.458902 kubelet[2147]: E0706 23:49:20.458760 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:20.459698 kubelet[2147]: E0706 23:49:20.456875 2147 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.53:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.53:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fce6ffb2843a2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:49:20.42082397 +0000 UTC m=+0.406751530,LastTimestamp:2025-07-06 23:49:20.42082397 +0000 UTC m=+0.406751530,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:49:20.460065 kubelet[2147]: I0706 23:49:20.460042 2147 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:49:20.464125 kubelet[2147]: I0706 23:49:20.463969 2147 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:49:20.464125 kubelet[2147]: I0706 23:49:20.463996 2147 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:49:20.464125 kubelet[2147]: E0706 23:49:20.463995 2147 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:49:20.477448 kubelet[2147]: I0706 23:49:20.477419 2147 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:49:20.477448 kubelet[2147]: I0706 23:49:20.477434 2147 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:49:20.477448 kubelet[2147]: I0706 23:49:20.477450 2147 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:49:20.559169 kubelet[2147]: E0706 23:49:20.559115 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:20.659495 kubelet[2147]: E0706 23:49:20.659293 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:20.659495 kubelet[2147]: E0706 23:49:20.659317 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="400ms" Jul 6 23:49:20.759683 kubelet[2147]: E0706 23:49:20.759607 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:20.860083 kubelet[2147]: E0706 23:49:20.860034 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:20.960265 kubelet[2147]: E0706 23:49:20.960109 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:21.045247 kubelet[2147]: I0706 23:49:21.045181 2147 policy_none.go:49] "None policy: Start" Jul 6 23:49:21.046389 kubelet[2147]: I0706 23:49:21.046358 2147 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:49:21.046782 kubelet[2147]: I0706 23:49:21.046493 2147 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:49:21.049706 kubelet[2147]: I0706 23:49:21.049630 2147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:49:21.052192 kubelet[2147]: I0706 23:49:21.052110 2147 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:49:21.052192 kubelet[2147]: I0706 23:49:21.052188 2147 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:49:21.052271 kubelet[2147]: I0706 23:49:21.052217 2147 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:49:21.052295 kubelet[2147]: E0706 23:49:21.052265 2147 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:49:21.053553 kubelet[2147]: W0706 23:49:21.053127 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:21.053553 kubelet[2147]: E0706 23:49:21.053196 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:21.057026 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
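
Every reflector failure in this stretch is the same symptom: dial tcp 10.0.0.53:6443: connect: connection refused. On first boot that is expected rather than fatal, because the apiserver the kubelet is trying to watch is itself one of the static pods the kubelet is about to start from /etc/kubernetes/manifests. A hypothetical standalone probe (not kubelet code) for watching the moment the endpoint from the log starts accepting connections:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiserver = "10.0.0.53:6443" // endpoint taken from the log above
	for {
		conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
		if err != nil {
			fmt.Println("still refused:", err)
			time.Sleep(time.Second)
			continue
		}
		conn.Close()
		fmt.Println("apiserver is accepting connections")
		return
	}
}
```
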
Jul 6 23:49:21.060209 kubelet[2147]: E0706 23:49:21.060168 2147 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:49:21.060361 kubelet[2147]: E0706 23:49:21.060289 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="800ms" Jul 6 23:49:21.075807 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:49:21.078692 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:49:21.095711 kubelet[2147]: I0706 23:49:21.095627 2147 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:49:21.095900 kubelet[2147]: I0706 23:49:21.095887 2147 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:49:21.095958 kubelet[2147]: I0706 23:49:21.095903 2147 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:49:21.096238 kubelet[2147]: I0706 23:49:21.096217 2147 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:49:21.097065 kubelet[2147]: E0706 23:49:21.096995 2147 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:49:21.160572 kubelet[2147]: I0706 23:49:21.160476 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3697458fb4359a2d45c33935cc5193a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3697458fb4359a2d45c33935cc5193a8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:21.160572 kubelet[2147]: I0706 23:49:21.160526 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:21.160572 kubelet[2147]: I0706 23:49:21.160567 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:21.160871 kubelet[2147]: I0706 23:49:21.160583 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:21.160871 kubelet[2147]: I0706 23:49:21.160597 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:49:21.160871 kubelet[2147]: 
I0706 23:49:21.160610 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3697458fb4359a2d45c33935cc5193a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3697458fb4359a2d45c33935cc5193a8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:21.160871 kubelet[2147]: I0706 23:49:21.160622 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3697458fb4359a2d45c33935cc5193a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3697458fb4359a2d45c33935cc5193a8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:21.160871 kubelet[2147]: I0706 23:49:21.160635 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:21.161008 kubelet[2147]: I0706 23:49:21.160667 2147 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:21.162168 systemd[1]: Created slice kubepods-burstable-pod3697458fb4359a2d45c33935cc5193a8.slice - libcontainer container kubepods-burstable-pod3697458fb4359a2d45c33935cc5193a8.slice. Jul 6 23:49:21.188413 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 6 23:49:21.197137 kubelet[2147]: I0706 23:49:21.197092 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:49:21.197523 kubelet[2147]: E0706 23:49:21.197480 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 6 23:49:21.199725 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
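
The "Created slice" lines follow a fixed naming pattern: kubepods-<qos>-pod<uid>.slice, with any dashes in the pod UID escaped to underscores for systemd. Static pods carry hash-style UIDs (no dashes), while regular pods (like kube-proxy later in this log) get UUIDs. A small sketch of the pattern as inferred from this log, not from kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSlice reproduces the slice naming visible in the "Created slice" entries.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Static pod (hash-style UID) and regular pod (UUID), both from this log.
	fmt.Println(podSlice("burstable", "3697458fb4359a2d45c33935cc5193a8"))
	fmt.Println(podSlice("besteffort", "8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041"))
}
```

Running it prints exactly the two slice names that appear in this log.
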
Jul 6 23:49:21.309569 kubelet[2147]: W0706 23:49:21.309371 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:21.309569 kubelet[2147]: E0706 23:49:21.309445 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.53:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:21.385837 kubelet[2147]: W0706 23:49:21.385781 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:21.385837 kubelet[2147]: E0706 23:49:21.385840 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:21.399570 kubelet[2147]: I0706 23:49:21.399527 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:49:21.399947 kubelet[2147]: E0706 23:49:21.399916 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 6 23:49:21.486033 kubelet[2147]: E0706 23:49:21.485929 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:21.486810 containerd[1469]: time="2025-07-06T23:49:21.486762802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3697458fb4359a2d45c33935cc5193a8,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:21.498134 kubelet[2147]: E0706 23:49:21.498083 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:21.498711 containerd[1469]: time="2025-07-06T23:49:21.498666969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:21.502044 kubelet[2147]: E0706 23:49:21.502007 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:21.502568 containerd[1469]: time="2025-07-06T23:49:21.502508408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:21.564783 kubelet[2147]: W0706 23:49:21.564514 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:21.564783 
kubelet[2147]: E0706 23:49:21.564689 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:21.802302 kubelet[2147]: I0706 23:49:21.802248 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:49:21.802762 kubelet[2147]: E0706 23:49:21.802708 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 6 23:49:21.861796 kubelet[2147]: E0706 23:49:21.861621 2147 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.53:6443: connect: connection refused" interval="1.6s" Jul 6 23:49:22.016265 kubelet[2147]: W0706 23:49:22.016180 2147 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.53:6443: connect: connection refused Jul 6 23:49:22.016265 kubelet[2147]: E0706 23:49:22.016251 2147 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:22.051025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount377940567.mount: Deactivated successfully. 
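
Worth noticing across the "Failed to ensure lease exists, will retry" entries: the reported interval doubles each time, 200ms, 400ms, 800ms, and now 1.6s. A minimal sketch of that capped exponential backoff; the doubling is observable in the log, while the cap and iteration count here are assumptions for illustration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // assumed cap, not taken from the log
	for i := 0; i < 8; i++ {
		fmt.Println("retry in", interval)
		interval *= 2
		if interval > maxInterval {
			interval = maxInterval
		}
	}
}
```
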
Jul 6 23:49:22.058802 containerd[1469]: time="2025-07-06T23:49:22.058748575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:49:22.059792 containerd[1469]: time="2025-07-06T23:49:22.059762084Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:49:22.060644 containerd[1469]: time="2025-07-06T23:49:22.060590917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:49:22.061517 containerd[1469]: time="2025-07-06T23:49:22.061474486Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:49:22.062316 containerd[1469]: time="2025-07-06T23:49:22.062260738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:49:22.063173 containerd[1469]: time="2025-07-06T23:49:22.063131602Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:49:22.064118 containerd[1469]: time="2025-07-06T23:49:22.064068294Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:49:22.068332 containerd[1469]: time="2025-07-06T23:49:22.068281044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:49:22.069276 containerd[1469]: time="2025-07-06T23:49:22.069231080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.383696ms" Jul 6 23:49:22.070966 containerd[1469]: time="2025-07-06T23:49:22.070915269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 568.296078ms" Jul 6 23:49:22.072389 containerd[1469]: time="2025-07-06T23:49:22.072348205Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.597894ms" Jul 6 23:49:22.340437 containerd[1469]: time="2025-07-06T23:49:22.340072311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:22.340437 containerd[1469]: time="2025-07-06T23:49:22.340132156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:22.340437 containerd[1469]: time="2025-07-06T23:49:22.340146054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:22.342054 containerd[1469]: time="2025-07-06T23:49:22.341468827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:22.343602 containerd[1469]: time="2025-07-06T23:49:22.342916019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:22.343602 containerd[1469]: time="2025-07-06T23:49:22.343008778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:22.343602 containerd[1469]: time="2025-07-06T23:49:22.343029428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:22.343602 containerd[1469]: time="2025-07-06T23:49:22.343314555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:22.343729 containerd[1469]: time="2025-07-06T23:49:22.343267926Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:22.343729 containerd[1469]: time="2025-07-06T23:49:22.343321890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:22.343729 containerd[1469]: time="2025-07-06T23:49:22.343342990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:22.343729 containerd[1469]: time="2025-07-06T23:49:22.343446470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:22.390926 systemd[1]: Started cri-containerd-fde1ce32f0c8f0a9ba4af7cf6019737636b335b38a237b5dc4eab0fcece03a43.scope - libcontainer container fde1ce32f0c8f0a9ba4af7cf6019737636b335b38a237b5dc4eab0fcece03a43. Jul 6 23:49:22.395731 systemd[1]: Started cri-containerd-77639d93fb47d17f948e2224512136b1cf0fec0cb602230baa07e41bf8a11f4d.scope - libcontainer container 77639d93fb47d17f948e2224512136b1cf0fec0cb602230baa07e41bf8a11f4d. Jul 6 23:49:22.398561 systemd[1]: Started cri-containerd-9beb48ddd71b9a5927bde9dfd3924834e6a2d962fc4289e3e9aca0353eda6302.scope - libcontainer container 9beb48ddd71b9a5927bde9dfd3924834e6a2d962fc4289e3e9aca0353eda6302. 
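
The pause:3.8 image referenced above (311286 bytes per the size recorded in the pull lines) exists only so that each sandbox has one long-lived process holding the pod's namespaces open; the three cri-containerd-*.scope units just started each wrap one. A Go analogue of that behavior, purely illustrative — the real image ships a tiny C binary that additionally reaps zombie processes as PID 1:

```go
package main

import (
	"os"
	"os/signal"
	"syscall"
)

func main() {
	// Hold the pod's namespaces open by doing nothing until teardown.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	<-sig
}
```
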
Jul 6 23:49:22.457397 containerd[1469]: time="2025-07-06T23:49:22.457324727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"9beb48ddd71b9a5927bde9dfd3924834e6a2d962fc4289e3e9aca0353eda6302\"" Jul 6 23:49:22.458800 kubelet[2147]: E0706 23:49:22.458767 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:22.461218 containerd[1469]: time="2025-07-06T23:49:22.461181643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3697458fb4359a2d45c33935cc5193a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"77639d93fb47d17f948e2224512136b1cf0fec0cb602230baa07e41bf8a11f4d\"" Jul 6 23:49:22.461785 containerd[1469]: time="2025-07-06T23:49:22.461754544Z" level=info msg="CreateContainer within sandbox \"9beb48ddd71b9a5927bde9dfd3924834e6a2d962fc4289e3e9aca0353eda6302\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:49:22.462053 kubelet[2147]: E0706 23:49:22.462018 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:22.464292 containerd[1469]: time="2025-07-06T23:49:22.464248551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde1ce32f0c8f0a9ba4af7cf6019737636b335b38a237b5dc4eab0fcece03a43\"" Jul 6 23:49:22.465258 containerd[1469]: time="2025-07-06T23:49:22.464691232Z" level=info msg="CreateContainer within sandbox \"77639d93fb47d17f948e2224512136b1cf0fec0cb602230baa07e41bf8a11f4d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:49:22.465861 kubelet[2147]: E0706 23:49:22.465836 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:22.467458 containerd[1469]: time="2025-07-06T23:49:22.467398037Z" level=info msg="CreateContainer within sandbox \"fde1ce32f0c8f0a9ba4af7cf6019737636b335b38a237b5dc4eab0fcece03a43\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:49:22.494256 containerd[1469]: time="2025-07-06T23:49:22.494183039Z" level=info msg="CreateContainer within sandbox \"9beb48ddd71b9a5927bde9dfd3924834e6a2d962fc4289e3e9aca0353eda6302\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3541d5a47669176734dfd76e67c9f70a8bb8c3c4e7fbc2ef8960a406192c787e\"" Jul 6 23:49:22.494940 containerd[1469]: time="2025-07-06T23:49:22.494902733Z" level=info msg="StartContainer for \"3541d5a47669176734dfd76e67c9f70a8bb8c3c4e7fbc2ef8960a406192c787e\"" Jul 6 23:49:22.509283 containerd[1469]: time="2025-07-06T23:49:22.509173863Z" level=info msg="CreateContainer within sandbox \"fde1ce32f0c8f0a9ba4af7cf6019737636b335b38a237b5dc4eab0fcece03a43\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"221b94f420b41112421bfda7e2c8d99bf4f28b134968a62f3991ccb68bed3119\"" Jul 6 23:49:22.509859 containerd[1469]: time="2025-07-06T23:49:22.509811219Z" level=info msg="StartContainer for \"221b94f420b41112421bfda7e2c8d99bf4f28b134968a62f3991ccb68bed3119\"" Jul 6 23:49:22.510302 
containerd[1469]: time="2025-07-06T23:49:22.510164228Z" level=info msg="CreateContainer within sandbox \"77639d93fb47d17f948e2224512136b1cf0fec0cb602230baa07e41bf8a11f4d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e07dfee44829485ac77c4d4940e1255591ddcc7a6430d9dc672331be326c8abc\"" Jul 6 23:49:22.510730 containerd[1469]: time="2025-07-06T23:49:22.510696151Z" level=info msg="StartContainer for \"e07dfee44829485ac77c4d4940e1255591ddcc7a6430d9dc672331be326c8abc\"" Jul 6 23:49:22.528874 systemd[1]: Started cri-containerd-3541d5a47669176734dfd76e67c9f70a8bb8c3c4e7fbc2ef8960a406192c787e.scope - libcontainer container 3541d5a47669176734dfd76e67c9f70a8bb8c3c4e7fbc2ef8960a406192c787e. Jul 6 23:49:22.546690 systemd[1]: Started cri-containerd-e07dfee44829485ac77c4d4940e1255591ddcc7a6430d9dc672331be326c8abc.scope - libcontainer container e07dfee44829485ac77c4d4940e1255591ddcc7a6430d9dc672331be326c8abc. Jul 6 23:49:22.549930 systemd[1]: Started cri-containerd-221b94f420b41112421bfda7e2c8d99bf4f28b134968a62f3991ccb68bed3119.scope - libcontainer container 221b94f420b41112421bfda7e2c8d99bf4f28b134968a62f3991ccb68bed3119. Jul 6 23:49:22.550815 kubelet[2147]: E0706 23:49:22.550726 2147 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.53:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:49:22.590289 containerd[1469]: time="2025-07-06T23:49:22.590176448Z" level=info msg="StartContainer for \"3541d5a47669176734dfd76e67c9f70a8bb8c3c4e7fbc2ef8960a406192c787e\" returns successfully" Jul 6 23:49:22.598689 containerd[1469]: time="2025-07-06T23:49:22.598493480Z" level=info msg="StartContainer for \"e07dfee44829485ac77c4d4940e1255591ddcc7a6430d9dc672331be326c8abc\" returns successfully" Jul 6 23:49:22.605189 containerd[1469]: time="2025-07-06T23:49:22.605139760Z" level=info msg="StartContainer for \"221b94f420b41112421bfda7e2c8d99bf4f28b134968a62f3991ccb68bed3119\" returns successfully" Jul 6 23:49:22.605912 kubelet[2147]: I0706 23:49:22.605457 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:49:22.606144 kubelet[2147]: E0706 23:49:22.606111 2147 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.53:6443/api/v1/nodes\": dial tcp 10.0.0.53:6443: connect: connection refused" node="localhost" Jul 6 23:49:23.060843 kubelet[2147]: E0706 23:49:23.060587 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:23.060843 kubelet[2147]: E0706 23:49:23.060687 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:23.063046 kubelet[2147]: E0706 23:49:23.062977 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:24.065208 kubelet[2147]: E0706 23:49:24.065161 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 
23:49:24.209572 kubelet[2147]: I0706 23:49:24.209526 2147 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:49:24.416201 kubelet[2147]: I0706 23:49:24.416063 2147 apiserver.go:52] "Watching apiserver" Jul 6 23:49:24.416311 kubelet[2147]: E0706 23:49:24.416232 2147 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:49:24.459344 kubelet[2147]: I0706 23:49:24.459304 2147 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:49:24.463796 kubelet[2147]: I0706 23:49:24.463755 2147 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:49:24.463796 kubelet[2147]: E0706 23:49:24.463781 2147 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:49:25.069722 kubelet[2147]: E0706 23:49:25.069675 2147 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:25.070153 kubelet[2147]: E0706 23:49:25.069851 2147 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:26.449215 systemd[1]: Reloading requested from client PID 2427 ('systemctl') (unit session-7.scope)... Jul 6 23:49:26.449234 systemd[1]: Reloading... Jul 6 23:49:26.539640 zram_generator::config[2470]: No configuration found. Jul 6 23:49:26.646444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:49:26.739581 systemd[1]: Reloading finished in 289 ms. Jul 6 23:49:26.787910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:49:26.809178 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:49:26.809516 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:26.809623 systemd[1]: kubelet.service: Consumed 1.090s CPU time, 131.6M memory peak, 0B memory swap peak. Jul 6 23:49:26.824764 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:49:27.002767 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:49:27.008589 (kubelet)[2511]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:49:27.049904 kubelet[2511]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:49:27.049904 kubelet[2511]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:49:27.049904 kubelet[2511]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 6 23:49:27.050372 kubelet[2511]: I0706 23:49:27.049966 2511 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:49:27.056363 kubelet[2511]: I0706 23:49:27.056320 2511 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:49:27.056363 kubelet[2511]: I0706 23:49:27.056352 2511 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:49:27.056611 kubelet[2511]: I0706 23:49:27.056588 2511 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:49:27.058047 kubelet[2511]: I0706 23:49:27.058018 2511 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:49:27.060016 kubelet[2511]: I0706 23:49:27.059977 2511 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:49:27.065677 kubelet[2511]: E0706 23:49:27.065640 2511 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:49:27.065677 kubelet[2511]: I0706 23:49:27.065671 2511 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:49:27.070679 kubelet[2511]: I0706 23:49:27.070655 2511 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:49:27.070799 kubelet[2511]: I0706 23:49:27.070780 2511 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:49:27.070947 kubelet[2511]: I0706 23:49:27.070913 2511 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:49:27.071137 kubelet[2511]: I0706 23:49:27.070943 2511 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:49:27.071224 kubelet[2511]: I0706 23:49:27.071146 2511 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:49:27.071224 kubelet[2511]: I0706 23:49:27.071155 2511 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:49:27.071224 kubelet[2511]: I0706 23:49:27.071181 2511 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:49:27.071350 kubelet[2511]: I0706 23:49:27.071291 2511 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:49:27.071350 kubelet[2511]: I0706 23:49:27.071303 2511 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:49:27.071350 kubelet[2511]: I0706 23:49:27.071336 2511 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:49:27.071350 kubelet[2511]: I0706 23:49:27.071346 2511 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:49:27.071859 kubelet[2511]: I0706 23:49:27.071837 2511 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:49:27.073955 kubelet[2511]: I0706 23:49:27.072261 2511 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:49:27.073955 kubelet[2511]: I0706 23:49:27.072675 2511 server.go:1274] "Started kubelet" Jul 6 23:49:27.074157 kubelet[2511]: I0706 23:49:27.074136 2511 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:49:27.074220 kubelet[2511]: I0706 23:49:27.074165 2511 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:49:27.074835 kubelet[2511]: I0706 23:49:27.074777 2511 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:49:27.074835 kubelet[2511]: I0706 23:49:27.074830 2511 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:49:27.075770 kubelet[2511]: I0706 23:49:27.075746 2511 server.go:449] "Adding debug handlers 
to kubelet server" Jul 6 23:49:27.079353 kubelet[2511]: I0706 23:49:27.078447 2511 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:49:27.083696 kubelet[2511]: I0706 23:49:27.083652 2511 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:49:27.084457 kubelet[2511]: I0706 23:49:27.084163 2511 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:49:27.084457 kubelet[2511]: I0706 23:49:27.084179 2511 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:49:27.086983 kubelet[2511]: I0706 23:49:27.086463 2511 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:49:27.087126 kubelet[2511]: I0706 23:49:27.087025 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:49:27.087255 kubelet[2511]: I0706 23:49:27.087225 2511 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:49:27.087948 kubelet[2511]: E0706 23:49:27.087520 2511 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:49:27.088834 kubelet[2511]: I0706 23:49:27.088764 2511 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:49:27.088834 kubelet[2511]: I0706 23:49:27.088786 2511 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:49:27.088834 kubelet[2511]: I0706 23:49:27.088806 2511 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:49:27.088997 kubelet[2511]: E0706 23:49:27.088865 2511 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:49:27.089324 kubelet[2511]: I0706 23:49:27.089225 2511 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:49:27.119960 kubelet[2511]: I0706 23:49:27.119909 2511 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:49:27.119960 kubelet[2511]: I0706 23:49:27.119935 2511 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:49:27.119960 kubelet[2511]: I0706 23:49:27.119959 2511 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:49:27.120193 kubelet[2511]: I0706 23:49:27.120152 2511 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:49:27.120225 kubelet[2511]: I0706 23:49:27.120171 2511 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:49:27.120225 kubelet[2511]: I0706 23:49:27.120207 2511 policy_none.go:49] "None policy: Start" Jul 6 23:49:27.122427 kubelet[2511]: I0706 23:49:27.120841 2511 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:49:27.122427 kubelet[2511]: I0706 23:49:27.120873 2511 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:49:27.122427 kubelet[2511]: I0706 23:49:27.121085 2511 state_mem.go:75] "Updated machine memory state" Jul 6 23:49:27.126434 kubelet[2511]: I0706 23:49:27.126398 2511 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:49:27.126678 kubelet[2511]: I0706 23:49:27.126594 2511 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:49:27.126678 kubelet[2511]: I0706 
23:49:27.126605 2511 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:49:27.127142 kubelet[2511]: I0706 23:49:27.126793 2511 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:49:27.234768 kubelet[2511]: I0706 23:49:27.234718 2511 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:49:27.239954 kubelet[2511]: I0706 23:49:27.239934 2511 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 6 23:49:27.240041 kubelet[2511]: I0706 23:49:27.240002 2511 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:49:27.386255 kubelet[2511]: I0706 23:49:27.386129 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:27.386255 kubelet[2511]: I0706 23:49:27.386161 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:49:27.386255 kubelet[2511]: I0706 23:49:27.386180 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3697458fb4359a2d45c33935cc5193a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3697458fb4359a2d45c33935cc5193a8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:27.386255 kubelet[2511]: I0706 23:49:27.386200 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3697458fb4359a2d45c33935cc5193a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3697458fb4359a2d45c33935cc5193a8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:27.386255 kubelet[2511]: I0706 23:49:27.386215 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3697458fb4359a2d45c33935cc5193a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3697458fb4359a2d45c33935cc5193a8\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:27.387036 kubelet[2511]: I0706 23:49:27.386234 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:27.387036 kubelet[2511]: I0706 23:49:27.386248 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:27.387036 kubelet[2511]: I0706 23:49:27.386265 2511 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:27.387036 kubelet[2511]: I0706 23:49:27.386283 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:49:27.495504 kubelet[2511]: E0706 23:49:27.495457 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:27.497586 kubelet[2511]: E0706 23:49:27.497517 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:27.498677 kubelet[2511]: E0706 23:49:27.498645 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:28.072669 kubelet[2511]: I0706 23:49:28.072634 2511 apiserver.go:52] "Watching apiserver" Jul 6 23:49:28.085253 kubelet[2511]: I0706 23:49:28.085202 2511 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:49:28.100037 kubelet[2511]: E0706 23:49:28.099855 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:28.110096 kubelet[2511]: E0706 23:49:28.109329 2511 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 6 23:49:28.110349 kubelet[2511]: E0706 23:49:28.110230 2511 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:49:28.110383 kubelet[2511]: E0706 23:49:28.110369 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:28.110586 kubelet[2511]: E0706 23:49:28.110568 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:28.128137 kubelet[2511]: I0706 23:49:28.128033 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.128007976 podStartE2EDuration="1.128007976s" podCreationTimestamp="2025-07-06 23:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:28.127122046 +0000 UTC m=+1.114448325" watchObservedRunningTime="2025-07-06 23:49:28.128007976 +0000 UTC m=+1.115334255" Jul 6 23:49:28.128328 kubelet[2511]: I0706 23:49:28.128195 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.128188591 podStartE2EDuration="1.128188591s" podCreationTimestamp="2025-07-06 23:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:28.120472683 +0000 UTC m=+1.107798962" watchObservedRunningTime="2025-07-06 23:49:28.128188591 +0000 UTC m=+1.115514870" Jul 6 23:49:28.134830 kubelet[2511]: I0706 23:49:28.134757 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.134736481 podStartE2EDuration="1.134736481s" podCreationTimestamp="2025-07-06 23:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:28.134674683 +0000 UTC m=+1.122000962" watchObservedRunningTime="2025-07-06 23:49:28.134736481 +0000 UTC m=+1.122062760" Jul 6 23:49:29.101120 kubelet[2511]: E0706 23:49:29.101058 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:29.101521 kubelet[2511]: E0706 23:49:29.101224 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:30.103451 kubelet[2511]: E0706 23:49:30.103285 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:30.852994 kubelet[2511]: E0706 23:49:30.852951 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:32.559643 kubelet[2511]: I0706 23:49:32.559604 2511 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:49:32.560099 containerd[1469]: time="2025-07-06T23:49:32.559931082Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:49:32.560350 kubelet[2511]: I0706 23:49:32.560138 2511 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:49:33.253251 systemd[1]: Created slice kubepods-besteffort-pod8ff350e7_0b0e_4f3e_bcb2_d0cf2bc8d041.slice - libcontainer container kubepods-besteffort-pod8ff350e7_0b0e_4f3e_bcb2_d0cf2bc8d041.slice. Jul 6 23:49:33.263669 update_engine[1451]: I20250706 23:49:33.263617 1451 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:49:33.292567 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2570) Jul 6 23:49:33.325216 kubelet[2511]: I0706 23:49:33.323294 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041-kube-proxy\") pod \"kube-proxy-lhctt\" (UID: \"8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041\") " pod="kube-system/kube-proxy-lhctt" Jul 6 23:49:33.325216 kubelet[2511]: I0706 23:49:33.323334 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf66n\" (UniqueName: \"kubernetes.io/projected/8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041-kube-api-access-hf66n\") pod \"kube-proxy-lhctt\" (UID: \"8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041\") " pod="kube-system/kube-proxy-lhctt" Jul 6 23:49:33.325216 kubelet[2511]: I0706 23:49:33.323354 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041-xtables-lock\") pod \"kube-proxy-lhctt\" (UID: \"8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041\") " pod="kube-system/kube-proxy-lhctt" Jul 6 23:49:33.325216 kubelet[2511]: I0706 23:49:33.323368 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041-lib-modules\") pod \"kube-proxy-lhctt\" (UID: \"8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041\") " pod="kube-system/kube-proxy-lhctt" Jul 6 23:49:33.333565 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2572) Jul 6 23:49:33.354165 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2572) Jul 6 23:49:33.429860 kubelet[2511]: E0706 23:49:33.429823 2511 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jul 6 23:49:33.429860 kubelet[2511]: E0706 23:49:33.429853 2511 projected.go:194] Error preparing data for projected volume kube-api-access-hf66n for pod kube-system/kube-proxy-lhctt: configmap "kube-root-ca.crt" not found Jul 6 23:49:33.430014 kubelet[2511]: E0706 23:49:33.429903 2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041-kube-api-access-hf66n podName:8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041 nodeName:}" failed. No retries permitted until 2025-07-06 23:49:33.92988642 +0000 UTC m=+6.917212699 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hf66n" (UniqueName: "kubernetes.io/projected/8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041-kube-api-access-hf66n") pod "kube-proxy-lhctt" (UID: "8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041") : configmap "kube-root-ca.crt" not found Jul 6 23:49:34.059363 systemd[1]: Created slice kubepods-besteffort-pod0bfdc3c1_13a8_4e9f_b1a4_6e60945cccc0.slice - libcontainer container kubepods-besteffort-pod0bfdc3c1_13a8_4e9f_b1a4_6e60945cccc0.slice. 
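
The MountVolume.SetUp failure above is a second, smaller bootstrap-ordering problem: projected service-account volumes include the kube-root-ca.crt ConfigMap, which kube-controller-manager publishes into each namespace only once it is running, so the kubelet parks the mount and retries 500ms later, as logged. A hypothetical standalone check for that ConfigMap using client-go (the admin.conf kubeconfig path is an assumption, not taken from this log):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for a kubeadm-style control plane node.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
	if err != nil {
		fmt.Println("not published yet:", err) // the condition behind the retry above
		return
	}
	fmt.Printf("found, %d bytes of CA bundle\n", len(cm.Data["ca.crt"]))
}
```
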
Jul 6 23:49:34.130737 kubelet[2511]: I0706 23:49:34.130696 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0bfdc3c1-13a8-4e9f-b1a4-6e60945cccc0-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-vw529\" (UID: \"0bfdc3c1-13a8-4e9f-b1a4-6e60945cccc0\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vw529" Jul 6 23:49:34.130737 kubelet[2511]: I0706 23:49:34.130741 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd82g\" (UniqueName: \"kubernetes.io/projected/0bfdc3c1-13a8-4e9f-b1a4-6e60945cccc0-kube-api-access-sd82g\") pod \"tigera-operator-5bf8dfcb4-vw529\" (UID: \"0bfdc3c1-13a8-4e9f-b1a4-6e60945cccc0\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-vw529" Jul 6 23:49:34.166847 kubelet[2511]: E0706 23:49:34.166815 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:34.167500 containerd[1469]: time="2025-07-06T23:49:34.167280179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhctt,Uid:8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041,Namespace:kube-system,Attempt:0,}" Jul 6 23:49:34.191254 containerd[1469]: time="2025-07-06T23:49:34.191114473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:34.191254 containerd[1469]: time="2025-07-06T23:49:34.191191359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:34.191254 containerd[1469]: time="2025-07-06T23:49:34.191206047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:34.191430 containerd[1469]: time="2025-07-06T23:49:34.191311347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:34.211682 systemd[1]: Started cri-containerd-bc0174445d2a118c96aa7a71c681a3f7d6c2a5ca0db10ec21cce11f9873ddd62.scope - libcontainer container bc0174445d2a118c96aa7a71c681a3f7d6c2a5ca0db10ec21cce11f9873ddd62. 
Jul 6 23:49:34.237217 containerd[1469]: time="2025-07-06T23:49:34.237170954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lhctt,Uid:8ff350e7-0b0e-4f3e-bcb2-d0cf2bc8d041,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc0174445d2a118c96aa7a71c681a3f7d6c2a5ca0db10ec21cce11f9873ddd62\"" Jul 6 23:49:34.237911 kubelet[2511]: E0706 23:49:34.237865 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:34.239720 containerd[1469]: time="2025-07-06T23:49:34.239674003Z" level=info msg="CreateContainer within sandbox \"bc0174445d2a118c96aa7a71c681a3f7d6c2a5ca0db10ec21cce11f9873ddd62\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:49:34.255764 containerd[1469]: time="2025-07-06T23:49:34.255723138Z" level=info msg="CreateContainer within sandbox \"bc0174445d2a118c96aa7a71c681a3f7d6c2a5ca0db10ec21cce11f9873ddd62\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ceaf8246826c21c6e31ec0ced9a2cf6eef8439d9695eb0cfc0e2437346454392\"" Jul 6 23:49:34.256263 containerd[1469]: time="2025-07-06T23:49:34.256202689Z" level=info msg="StartContainer for \"ceaf8246826c21c6e31ec0ced9a2cf6eef8439d9695eb0cfc0e2437346454392\"" Jul 6 23:49:34.284673 systemd[1]: Started cri-containerd-ceaf8246826c21c6e31ec0ced9a2cf6eef8439d9695eb0cfc0e2437346454392.scope - libcontainer container ceaf8246826c21c6e31ec0ced9a2cf6eef8439d9695eb0cfc0e2437346454392. Jul 6 23:49:34.313765 containerd[1469]: time="2025-07-06T23:49:34.313268224Z" level=info msg="StartContainer for \"ceaf8246826c21c6e31ec0ced9a2cf6eef8439d9695eb0cfc0e2437346454392\" returns successfully" Jul 6 23:49:34.363438 containerd[1469]: time="2025-07-06T23:49:34.363393975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vw529,Uid:0bfdc3c1-13a8-4e9f-b1a4-6e60945cccc0,Namespace:tigera-operator,Attempt:0,}" Jul 6 23:49:34.388180 containerd[1469]: time="2025-07-06T23:49:34.387726706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:34.388180 containerd[1469]: time="2025-07-06T23:49:34.387790747Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:34.388180 containerd[1469]: time="2025-07-06T23:49:34.387814592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:34.388180 containerd[1469]: time="2025-07-06T23:49:34.387966069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:34.407707 systemd[1]: Started cri-containerd-78a02ee79b1dc768cf778dd25285172196d6e2869ccdfacec36c51cb4b95704f.scope - libcontainer container 78a02ee79b1dc768cf778dd25285172196d6e2869ccdfacec36c51cb4b95704f. 
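The kube-proxy startup above traces the standard CRI order: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer launches it. A dependency-free Go sketch of that call order (the RuntimeService interface below is a simplified stand-in, not the real k8s.io/cri-api definitions):

// Simplified stand-in for the CRI sequence seen in the log:
// RunPodSandbox -> CreateContainer (within the sandbox) -> StartContainer.
// Types and method shapes are illustrative only.
package main

import "fmt"

type RuntimeService interface {
	RunPodSandbox(name, namespace string) (sandboxID string, err error)
	CreateContainer(sandboxID, name string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(name, ns string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d", f.n), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d", f.n), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
	var rt RuntimeService = &fakeRuntime{}
	sb, _ := rt.RunPodSandbox("kube-proxy-lhctt", "kube-system")
	c, _ := rt.CreateContainer(sb, "kube-proxy")
	_ = rt.StartContainer(c)
	fmt.Println("started", c, "in", sb)
}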
Jul 6 23:49:34.446835 containerd[1469]: time="2025-07-06T23:49:34.446702264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-vw529,Uid:0bfdc3c1-13a8-4e9f-b1a4-6e60945cccc0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"78a02ee79b1dc768cf778dd25285172196d6e2869ccdfacec36c51cb4b95704f\"" Jul 6 23:49:34.449076 containerd[1469]: time="2025-07-06T23:49:34.449026955Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 6 23:49:34.700875 kubelet[2511]: E0706 23:49:34.700731 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:35.112677 kubelet[2511]: E0706 23:49:35.112607 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:35.113234 kubelet[2511]: E0706 23:49:35.113216 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:35.130869 kubelet[2511]: I0706 23:49:35.130810 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lhctt" podStartSLOduration=2.130791464 podStartE2EDuration="2.130791464s" podCreationTimestamp="2025-07-06 23:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:49:35.122170261 +0000 UTC m=+8.109496540" watchObservedRunningTime="2025-07-06 23:49:35.130791464 +0000 UTC m=+8.118117743" Jul 6 23:49:36.178625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687288318.mount: Deactivated successfully. 
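The recurring dns.go:153 warning means the host's resolv.conf carries more nameserver entries than a pod resolv.conf may hold; the kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and omits the rest. A sketch of that trimming, with a hypothetical fourth host entry standing in for whatever was actually omitted:

// Sketch of the trimming behind "Nameserver limits exceeded":
// glibc resolvers only consult the first 3 nameservers, so the
// kubelet applies a 3-entry prefix of the host's list. The fourth
// entry below is hypothetical, not from this log.
package main

import (
	"fmt"
	"strings"
)

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.1.1"} // last entry assumed
	const maxNameservers = 3
	applied := host
	if len(applied) > maxNameservers {
		applied = applied[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(applied, " "))
}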
Jul 6 23:49:37.177345 containerd[1469]: time="2025-07-06T23:49:37.177288136Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:37.177995 containerd[1469]: time="2025-07-06T23:49:37.177926795Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 6 23:49:37.178987 containerd[1469]: time="2025-07-06T23:49:37.178953980Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:37.181235 containerd[1469]: time="2025-07-06T23:49:37.181189332Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:49:37.181976 containerd[1469]: time="2025-07-06T23:49:37.181926307Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.732847735s" Jul 6 23:49:37.181976 containerd[1469]: time="2025-07-06T23:49:37.181968828Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 6 23:49:37.183737 containerd[1469]: time="2025-07-06T23:49:37.183668806Z" level=info msg="CreateContainer within sandbox \"78a02ee79b1dc768cf778dd25285172196d6e2869ccdfacec36c51cb4b95704f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 6 23:49:37.195649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2182505165.mount: Deactivated successfully. Jul 6 23:49:37.196567 containerd[1469]: time="2025-07-06T23:49:37.196519614Z" level=info msg="CreateContainer within sandbox \"78a02ee79b1dc768cf778dd25285172196d6e2869ccdfacec36c51cb4b95704f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"406c6335b01bfd35e9b586c2f77f36d09bc9a3e08112379f86d425797e669d86\"" Jul 6 23:49:37.196908 containerd[1469]: time="2025-07-06T23:49:37.196879636Z" level=info msg="StartContainer for \"406c6335b01bfd35e9b586c2f77f36d09bc9a3e08112379f86d425797e669d86\"" Jul 6 23:49:37.226692 systemd[1]: Started cri-containerd-406c6335b01bfd35e9b586c2f77f36d09bc9a3e08112379f86d425797e669d86.scope - libcontainer container 406c6335b01bfd35e9b586c2f77f36d09bc9a3e08112379f86d425797e669d86. 
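The pull record above also yields an effective transfer rate: 25052538 bytes of the tigera/operator image in 2.732847735s is roughly 9.2 MB/s (about 8.7 MiB/s). A quick check in Go, using only the two numbers reported in the log:

// Back-of-envelope rate check for the PullImage record:
// size "25052538" bytes over 2.732847735s.
package main

import "fmt"

func main() {
	const bytes = 25052538.0   // image size from the log
	const seconds = 2.732847735 // pull duration from the log
	rate := bytes / seconds
	fmt.Printf("%.0f B/s = %.2f MiB/s\n", rate, rate/(1024*1024))
}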
Jul 6 23:49:37.255115 containerd[1469]: time="2025-07-06T23:49:37.255061655Z" level=info msg="StartContainer for \"406c6335b01bfd35e9b586c2f77f36d09bc9a3e08112379f86d425797e669d86\" returns successfully" Jul 6 23:49:38.126569 kubelet[2511]: I0706 23:49:38.126479 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-vw529" podStartSLOduration=2.392470204 podStartE2EDuration="5.126462758s" podCreationTimestamp="2025-07-06 23:49:33 +0000 UTC" firstStartedPulling="2025-07-06 23:49:34.448484426 +0000 UTC m=+7.435810705" lastFinishedPulling="2025-07-06 23:49:37.18247698 +0000 UTC m=+10.169803259" observedRunningTime="2025-07-06 23:49:38.126118648 +0000 UTC m=+11.113444927" watchObservedRunningTime="2025-07-06 23:49:38.126462758 +0000 UTC m=+11.113789037" Jul 6 23:49:38.587618 kubelet[2511]: E0706 23:49:38.587560 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:40.858379 kubelet[2511]: E0706 23:49:40.858324 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:43.082985 sudo[1650]: pam_unix(sudo:session): session closed for user root Jul 6 23:49:43.085314 sshd[1647]: pam_unix(sshd:session): session closed for user core Jul 6 23:49:43.093342 systemd[1]: sshd@6-10.0.0.53:22-10.0.0.1:42792.service: Deactivated successfully. Jul 6 23:49:43.098048 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:49:43.099091 systemd[1]: session-7.scope: Consumed 5.266s CPU time, 158.8M memory peak, 0B memory swap peak. Jul 6 23:49:43.100848 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:49:43.102051 systemd-logind[1450]: Removed session 7. Jul 6 23:49:47.866513 systemd[1]: Created slice kubepods-besteffort-podab541f79_8920_4105_9614_0cec02b23d32.slice - libcontainer container kubepods-besteffort-podab541f79_8920_4105_9614_0cec02b23d32.slice. 
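The startup-latency entry for tigera-operator-5bf8dfcb4-vw529 decomposes exactly: the 5.126462758s end-to-end duration (creation at 23:49:33 to observed running at 23:49:38.126462758) minus the image-pull window (firstStartedPulling 23:49:34.448484426 to lastFinishedPulling 23:49:37.18247698, i.e. 2.733992554s) equals the reported podStartSLOduration of 2.392470204s. Reproduced from the log's own timestamps:

// Reproduce podStartSLOduration for tigera-operator-5bf8dfcb4-vw529:
// SLO duration = E2E duration minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-07-06 23:49:33 +0000 UTC")
	running := mustParse("2025-07-06 23:49:38.126462758 +0000 UTC")
	pullStart := mustParse("2025-07-06 23:49:34.448484426 +0000 UTC")
	pullEnd := mustParse("2025-07-06 23:49:37.18247698 +0000 UTC")

	e2e := running.Sub(created)    // 5.126462758s
	pull := pullEnd.Sub(pullStart) // 2.733992554s
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("image-pull window:  ", pull)
	fmt.Println("podStartSLOduration:", e2e-pull) // 2.392470204s
}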
Jul 6 23:49:47.917710 kubelet[2511]: I0706 23:49:47.917598 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ab541f79-8920-4105-9614-0cec02b23d32-typha-certs\") pod \"calico-typha-6bb7d5c888-n9x6p\" (UID: \"ab541f79-8920-4105-9614-0cec02b23d32\") " pod="calico-system/calico-typha-6bb7d5c888-n9x6p" Jul 6 23:49:47.917710 kubelet[2511]: I0706 23:49:47.917672 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ab541f79-8920-4105-9614-0cec02b23d32-tigera-ca-bundle\") pod \"calico-typha-6bb7d5c888-n9x6p\" (UID: \"ab541f79-8920-4105-9614-0cec02b23d32\") " pod="calico-system/calico-typha-6bb7d5c888-n9x6p" Jul 6 23:49:47.917710 kubelet[2511]: I0706 23:49:47.917711 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flncc\" (UniqueName: \"kubernetes.io/projected/ab541f79-8920-4105-9614-0cec02b23d32-kube-api-access-flncc\") pod \"calico-typha-6bb7d5c888-n9x6p\" (UID: \"ab541f79-8920-4105-9614-0cec02b23d32\") " pod="calico-system/calico-typha-6bb7d5c888-n9x6p" Jul 6 23:49:48.180109 kubelet[2511]: E0706 23:49:48.179890 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:48.181647 containerd[1469]: time="2025-07-06T23:49:48.181577075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb7d5c888-n9x6p,Uid:ab541f79-8920-4105-9614-0cec02b23d32,Namespace:calico-system,Attempt:0,}" Jul 6 23:49:48.223390 containerd[1469]: time="2025-07-06T23:49:48.223105912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:48.223390 containerd[1469]: time="2025-07-06T23:49:48.223244704Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:48.223390 containerd[1469]: time="2025-07-06T23:49:48.223275983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:48.223736 containerd[1469]: time="2025-07-06T23:49:48.223482532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:48.279793 systemd[1]: Started cri-containerd-18a12e838ac8411efea10c61da1264eb3d352578236dd96d8cabc56dc67d2ab0.scope - libcontainer container 18a12e838ac8411efea10c61da1264eb3d352578236dd96d8cabc56dc67d2ab0. Jul 6 23:49:48.303340 systemd[1]: Created slice kubepods-besteffort-pod774676ed_62c2_43d7_8f3a_8c252dc7fdb2.slice - libcontainer container kubepods-besteffort-pod774676ed_62c2_43d7_8f3a_8c252dc7fdb2.slice. 
Jul 6 23:49:48.320619 kubelet[2511]: I0706 23:49:48.320530 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-cni-bin-dir\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320619 kubelet[2511]: I0706 23:49:48.320625 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-lib-modules\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320781 kubelet[2511]: I0706 23:49:48.320656 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-var-lib-calico\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320781 kubelet[2511]: I0706 23:49:48.320685 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-xtables-lock\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320781 kubelet[2511]: I0706 23:49:48.320699 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-cni-log-dir\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320781 kubelet[2511]: I0706 23:49:48.320717 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-tigera-ca-bundle\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320781 kubelet[2511]: I0706 23:49:48.320734 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlbhw\" (UniqueName: \"kubernetes.io/projected/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-kube-api-access-wlbhw\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320940 kubelet[2511]: I0706 23:49:48.320754 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-flexvol-driver-host\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320940 kubelet[2511]: I0706 23:49:48.320767 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-node-certs\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320940 kubelet[2511]: I0706 23:49:48.320779 2511 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-policysync\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320940 kubelet[2511]: I0706 23:49:48.320793 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-cni-net-dir\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.320940 kubelet[2511]: I0706 23:49:48.320809 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/774676ed-62c2-43d7-8f3a-8c252dc7fdb2-var-run-calico\") pod \"calico-node-9f7tx\" (UID: \"774676ed-62c2-43d7-8f3a-8c252dc7fdb2\") " pod="calico-system/calico-node-9f7tx" Jul 6 23:49:48.349016 containerd[1469]: time="2025-07-06T23:49:48.348961777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6bb7d5c888-n9x6p,Uid:ab541f79-8920-4105-9614-0cec02b23d32,Namespace:calico-system,Attempt:0,} returns sandbox id \"18a12e838ac8411efea10c61da1264eb3d352578236dd96d8cabc56dc67d2ab0\"" Jul 6 23:49:48.350320 kubelet[2511]: E0706 23:49:48.349994 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:49:48.351345 containerd[1469]: time="2025-07-06T23:49:48.351307897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 6 23:49:48.438465 kubelet[2511]: E0706 23:49:48.436670 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.438465 kubelet[2511]: W0706 23:49:48.436705 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.438465 kubelet[2511]: E0706 23:49:48.436727 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.438465 kubelet[2511]: E0706 23:49:48.437502 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.438465 kubelet[2511]: W0706 23:49:48.437560 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.438465 kubelet[2511]: E0706 23:49:48.437606 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:49:48.450603 kubelet[2511]: E0706 23:49:48.450558 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.450603 kubelet[2511]: W0706 23:49:48.450586 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.450765 kubelet[2511]: E0706 23:49:48.450618 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.472516 kubelet[2511]: E0706 23:49:48.470799 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111" Jul 6 23:49:48.519589 kubelet[2511]: E0706 23:49:48.517841 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.519589 kubelet[2511]: W0706 23:49:48.517904 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.519589 kubelet[2511]: E0706 23:49:48.517946 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.519589 kubelet[2511]: E0706 23:49:48.518518 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.519589 kubelet[2511]: W0706 23:49:48.518529 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.519589 kubelet[2511]: E0706 23:49:48.519594 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.520183 kubelet[2511]: E0706 23:49:48.520141 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.520183 kubelet[2511]: W0706 23:49:48.520159 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.520183 kubelet[2511]: E0706 23:49:48.520169 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:49:48.539914 kubelet[2511]: I0706 23:49:48.539887 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e9ac6c9c-1856-41b6-91f1-74ff39eba111-varrun\") pod \"csi-node-driver-dkdw8\" (UID: \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\") " pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:49:48.540252 kubelet[2511]: I0706 23:49:48.540228 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e9ac6c9c-1856-41b6-91f1-74ff39eba111-socket-dir\") pod \"csi-node-driver-dkdw8\" (UID: \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\") " pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:49:48.543884 kubelet[2511]: I0706 23:49:48.543795 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e9ac6c9c-1856-41b6-91f1-74ff39eba111-registration-dir\") pod \"csi-node-driver-dkdw8\" (UID: \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\") " pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:49:48.544241 kubelet[2511]: I0706 23:49:48.544184 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jphbc\" (UniqueName: \"kubernetes.io/projected/e9ac6c9c-1856-41b6-91f1-74ff39eba111-kube-api-access-jphbc\") pod \"csi-node-driver-dkdw8\" (UID: \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\") " pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:49:48.547230 kubelet[2511]: I0706 23:49:48.547106 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e9ac6c9c-1856-41b6-91f1-74ff39eba111-kubelet-dir\") pod \"csi-node-driver-dkdw8\" (UID: \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\") " pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:49:48.556192 kubelet[2511]: E0706 23:49:48.556167 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.556192 kubelet[2511]: W0706 23:49:48.556187 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.556268 kubelet[2511]: E0706 23:49:48.556199 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 6 23:49:48.608784 containerd[1469]: time="2025-07-06T23:49:48.608727257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9f7tx,Uid:774676ed-62c2-43d7-8f3a-8c252dc7fdb2,Namespace:calico-system,Attempt:0,}" Jul 6 23:49:48.644016 containerd[1469]: time="2025-07-06T23:49:48.643320979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:49:48.644016 containerd[1469]: time="2025-07-06T23:49:48.643440785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:49:48.644016 containerd[1469]: time="2025-07-06T23:49:48.643501388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:48.644016 containerd[1469]: time="2025-07-06T23:49:48.643721964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:49:48.652130 kubelet[2511]: E0706 23:49:48.651980 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.652130 kubelet[2511]: W0706 23:49:48.652011 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.652130 kubelet[2511]: E0706 23:49:48.652031 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.652731 kubelet[2511]: E0706 23:49:48.652639 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.652731 kubelet[2511]: W0706 23:49:48.652653 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.652731 kubelet[2511]: E0706 23:49:48.652675 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.653225 kubelet[2511]: E0706 23:49:48.653137 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.653225 kubelet[2511]: W0706 23:49:48.653148 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.653225 kubelet[2511]: E0706 23:49:48.653171 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 6 23:49:48.662082 kubelet[2511]: E0706 23:49:48.661963 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.662082 kubelet[2511]: W0706 23:49:48.661975 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.662082 kubelet[2511]: E0706 23:49:48.662011 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jul 6 23:49:48.662643 kubelet[2511]: E0706 23:49:48.662625 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.662700 kubelet[2511]: W0706 23:49:48.662639 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.662739 kubelet[2511]: E0706 23:49:48.662703 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.663240 kubelet[2511]: E0706 23:49:48.663218 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.663295 kubelet[2511]: W0706 23:49:48.663271 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.663295 kubelet[2511]: E0706 23:49:48.663286 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.665784 systemd[1]: Started cri-containerd-db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda.scope - libcontainer container db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda. Jul 6 23:49:48.676000 kubelet[2511]: E0706 23:49:48.675859 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 6 23:49:48.676000 kubelet[2511]: W0706 23:49:48.675908 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 6 23:49:48.676000 kubelet[2511]: E0706 23:49:48.675934 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 6 23:49:48.704642 containerd[1469]: time="2025-07-06T23:49:48.703515180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9f7tx,Uid:774676ed-62c2-43d7-8f3a-8c252dc7fdb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\"" Jul 6 23:49:49.768022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236790912.mount: Deactivated successfully. 
Jul 6 23:49:50.089502 kubelet[2511]: E0706 23:49:50.089390 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111"
Jul 6 23:49:50.360705 containerd[1469]: time="2025-07-06T23:49:50.360569310Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:50.361755 containerd[1469]: time="2025-07-06T23:49:50.361725677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 6 23:49:50.364251 containerd[1469]: time="2025-07-06T23:49:50.364225484Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:50.366825 containerd[1469]: time="2025-07-06T23:49:50.366772349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:50.367907 containerd[1469]: time="2025-07-06T23:49:50.367855849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.016509581s"
Jul 6 23:49:50.368057 containerd[1469]: time="2025-07-06T23:49:50.367910322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 6 23:49:50.368902 containerd[1469]: time="2025-07-06T23:49:50.368869879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 6 23:49:50.399451 containerd[1469]: time="2025-07-06T23:49:50.399387489Z" level=info msg="CreateContainer within sandbox \"18a12e838ac8411efea10c61da1264eb3d352578236dd96d8cabc56dc67d2ab0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 6 23:49:50.416273 containerd[1469]: time="2025-07-06T23:49:50.416125274Z" level=info msg="CreateContainer within sandbox \"18a12e838ac8411efea10c61da1264eb3d352578236dd96d8cabc56dc67d2ab0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"300ffd40881584f59f37e02505f6318bd7b9c76d8a0983d597a84223002a5399\""
Jul 6 23:49:50.422517 containerd[1469]: time="2025-07-06T23:49:50.421818773Z" level=info msg="StartContainer for \"300ffd40881584f59f37e02505f6318bd7b9c76d8a0983d597a84223002a5399\""
Jul 6 23:49:50.466750 systemd[1]: Started cri-containerd-300ffd40881584f59f37e02505f6318bd7b9c76d8a0983d597a84223002a5399.scope - libcontainer container 300ffd40881584f59f37e02505f6318bd7b9c76d8a0983d597a84223002a5399.
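For reference, the pull recorded above can be reproduced by hand against the same containerd instance with crictl (assuming crictl is available on the node and pointed at containerd's CRI socket; the image tag is taken from the log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull ghcr.io/flatcar/calico/typha:v3.30.2
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images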
Jul 6 23:49:50.519354 containerd[1469]: time="2025-07-06T23:49:50.519295858Z" level=info msg="StartContainer for \"300ffd40881584f59f37e02505f6318bd7b9c76d8a0983d597a84223002a5399\" returns successfully"
Jul 6 23:49:51.152203 kubelet[2511]: E0706 23:49:51.152161 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:49:51.153098 kubelet[2511]: E0706 23:49:51.153061 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:49:51.153098 kubelet[2511]: W0706 23:49:51.153080 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:49:51.153098 kubelet[2511]: E0706 23:49:51.153099 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three kubelet messages repeat roughly thirty more times, log timestamps 23:49:51.153463 through 23:49:51.177644]
Jul 6 23:49:51.178364 kubelet[2511]: I0706 23:49:51.178312 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6bb7d5c888-n9x6p" podStartSLOduration=2.160402142 podStartE2EDuration="4.178291271s" podCreationTimestamp="2025-07-06 23:49:47 +0000 UTC" firstStartedPulling="2025-07-06 23:49:48.350786464 +0000 UTC m=+21.338112743" lastFinishedPulling="2025-07-06 23:49:50.368675572 +0000 UTC m=+23.356001872" observedRunningTime="2025-07-06 23:49:51.17458852 +0000 UTC m=+24.161914799" watchObservedRunningTime="2025-07-06 23:49:51.178291271 +0000 UTC m=+24.165617550"
Jul 6 23:49:51.179064 kubelet[2511]: E0706 23:49:51.178962 2511 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 6 23:49:51.179064 kubelet[2511]: W0706 23:49:51.178974 2511 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 6 23:49:51.179064 kubelet[2511]: E0706 23:49:51.178984 2511 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 6 23:49:51.699827 containerd[1469]: time="2025-07-06T23:49:51.699752137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:51.700682 containerd[1469]: time="2025-07-06T23:49:51.700621624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Jul 6 23:49:51.701829 containerd[1469]: time="2025-07-06T23:49:51.701764004Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:51.705896 containerd[1469]: time="2025-07-06T23:49:51.705856447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:51.707125 containerd[1469]: time="2025-07-06T23:49:51.707079080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.338175426s"
Jul 6 23:49:51.707125 containerd[1469]: time="2025-07-06T23:49:51.707118674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Jul 6 23:49:51.711429 containerd[1469]: time="2025-07-06T23:49:51.711394131Z" level=info msg="CreateContainer within sandbox \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jul 6 23:49:51.725087 containerd[1469]: time="2025-07-06T23:49:51.725031522Z" level=info msg="CreateContainer within sandbox \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4\""
Jul 6 23:49:51.725938 containerd[1469]: time="2025-07-06T23:49:51.725893454Z" level=info msg="StartContainer for \"33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4\""
Jul 6 23:49:51.767955 systemd[1]: Started cri-containerd-33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4.scope - libcontainer container 33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4.
Jul 6 23:49:51.817581 systemd[1]: cri-containerd-33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4.scope: Deactivated successfully.
Jul 6 23:49:52.091886 kubelet[2511]: E0706 23:49:52.091780 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111"
Jul 6 23:49:52.118320 containerd[1469]: time="2025-07-06T23:49:52.118209480Z" level=info msg="StartContainer for \"33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4\" returns successfully"
Jul 6 23:49:52.156233 kubelet[2511]: I0706 23:49:52.156196 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:49:52.156696 kubelet[2511]: E0706 23:49:52.156652 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:49:52.202807 containerd[1469]: time="2025-07-06T23:49:52.202720141Z" level=info msg="shim disconnected" id=33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4 namespace=k8s.io
Jul 6 23:49:52.202807 containerd[1469]: time="2025-07-06T23:49:52.202777629Z" level=warning msg="cleaning up after shim disconnected" id=33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4 namespace=k8s.io
Jul 6 23:49:52.202807 containerd[1469]: time="2025-07-06T23:49:52.202786515Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:49:52.379040 systemd[1]: run-containerd-runc-k8s.io-33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4-runc.mzCrl0.mount: Deactivated successfully.
Jul 6 23:49:52.379171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33c1160b2fb256f0652c4f70d6256eb0b387d5f8b137add2f5d738325a1cb2d4-rootfs.mount: Deactivated successfully.
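The recurring "Nameserver limits exceeded" errors are the kubelet noting that the node's /etc/resolv.conf lists more nameservers than the limit of three that it (following glibc) will propagate, so only the first three are applied. As a hedged reconstruction of the kind of resolv.conf that triggers the message (the first three servers come straight from the log line; the fourth entry is purely illustrative):

    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    # only the first three nameserver lines are applied; the rest are omitted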
Jul 6 23:49:53.160733 containerd[1469]: time="2025-07-06T23:49:53.160674265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Jul 6 23:49:54.089177 kubelet[2511]: E0706 23:49:54.089099 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111"
Jul 6 23:49:56.089977 kubelet[2511]: E0706 23:49:56.089890 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111"
Jul 6 23:49:56.650105 containerd[1469]: time="2025-07-06T23:49:56.650037995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:56.650867 containerd[1469]: time="2025-07-06T23:49:56.650821759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Jul 6 23:49:56.652791 containerd[1469]: time="2025-07-06T23:49:56.652743592Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:56.655517 containerd[1469]: time="2025-07-06T23:49:56.655475570Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:49:56.656222 containerd[1469]: time="2025-07-06T23:49:56.656178311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.495412744s"
Jul 6 23:49:56.656222 containerd[1469]: time="2025-07-06T23:49:56.656223606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Jul 6 23:49:56.658972 containerd[1469]: time="2025-07-06T23:49:56.658938200Z" level=info msg="CreateContainer within sandbox \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jul 6 23:49:56.677229 containerd[1469]: time="2025-07-06T23:49:56.677164532Z" level=info msg="CreateContainer within sandbox \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484\""
Jul 6 23:49:56.677879 containerd[1469]: time="2025-07-06T23:49:56.677837517Z" level=info msg="StartContainer for \"91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484\""
Jul 6 23:49:56.718768 systemd[1]: Started cri-containerd-91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484.scope - libcontainer container 91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484.
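The install-cni container started here is what populates /etc/cni/net.d on the host; until it finishes, the runtime keeps reporting "cni plugin not initialized", as the surrounding pod_workers errors show. As a rough sketch only (not the exact file from this system, but the usual shape of the conflist Calico's install-cni writes, typically as /etc/cni/net.d/10-calico.conflist; the kubeconfig path matches the fs-change event logged just below, and the nodename is assumed from the node name 'localhost' seen elsewhere in this log):

    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "datastore_type": "kubernetes",
          "nodename": "localhost",
          "ipam": { "type": "calico-ipam" },
          "policy": { "type": "k8s" },
          "kubernetes": { "kubeconfig": "/etc/cni/net.d/calico-kubeconfig" }
        },
        { "type": "portmap", "snat": true, "capabilities": { "portMappings": true } }
      ]
    }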
Jul 6 23:49:57.331235 containerd[1469]: time="2025-07-06T23:49:57.331148117Z" level=info msg="StartContainer for \"91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484\" returns successfully"
Jul 6 23:49:58.089842 kubelet[2511]: E0706 23:49:58.089764 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111"
Jul 6 23:49:58.591675 containerd[1469]: time="2025-07-06T23:49:58.591597763Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:49:58.596232 systemd[1]: cri-containerd-91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484.scope: Deactivated successfully.
Jul 6 23:49:58.617330 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484-rootfs.mount: Deactivated successfully.
Jul 6 23:49:58.620243 containerd[1469]: time="2025-07-06T23:49:58.620188896Z" level=info msg="shim disconnected" id=91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484 namespace=k8s.io
Jul 6 23:49:58.620243 containerd[1469]: time="2025-07-06T23:49:58.620240432Z" level=warning msg="cleaning up after shim disconnected" id=91e2a92001a2e1e2b5f2f9d751f37b960d821f02f55a599d9174670b332e0484 namespace=k8s.io
Jul 6 23:49:58.620372 containerd[1469]: time="2025-07-06T23:49:58.620249169Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:49:58.650235 kubelet[2511]: I0706 23:49:58.650183 2511 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 6 23:49:58.687570 kubelet[2511]: W0706 23:49:58.682076 2511 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: configmaps "goldmane" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Jul 6 23:49:58.687570 kubelet[2511]: E0706 23:49:58.682144 2511 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"goldmane\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jul 6 23:49:58.687570 kubelet[2511]: W0706 23:49:58.682171 2511 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object
Jul 6 23:49:58.687570 kubelet[2511]: E0706 23:49:58.682203 2511 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError"
Jul 6 23:49:58.689713 systemd[1]: Created slice kubepods-burstable-podbf7af7df_e789_4a3b_b647_3ff2fb52d715.slice - libcontainer container kubepods-burstable-podbf7af7df_e789_4a3b_b647_3ff2fb52d715.slice.
Jul 6 23:49:58.697002 systemd[1]: Created slice kubepods-besteffort-podcd9fb964_0fb8_4877_9487_51dc490180f3.slice - libcontainer container kubepods-besteffort-podcd9fb964_0fb8_4877_9487_51dc490180f3.slice.
Jul 6 23:49:58.706642 systemd[1]: Created slice kubepods-besteffort-pod9626945a_0af4_4eaa_ac43_a94044095a5d.slice - libcontainer container kubepods-besteffort-pod9626945a_0af4_4eaa_ac43_a94044095a5d.slice.
Jul 6 23:49:58.714474 systemd[1]: Created slice kubepods-burstable-pod8f2c4407_fa06_42c0_b2df_cdbd60e8d1cd.slice - libcontainer container kubepods-burstable-pod8f2c4407_fa06_42c0_b2df_cdbd60e8d1cd.slice.
Jul 6 23:49:58.721495 systemd[1]: Created slice kubepods-besteffort-pod3e5b7cb8_7d1b_4cad_a3a9_00603e8b2e51.slice - libcontainer container kubepods-besteffort-pod3e5b7cb8_7d1b_4cad_a3a9_00603e8b2e51.slice.
Jul 6 23:49:58.722512 kubelet[2511]: I0706 23:49:58.722477 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kzth\" (UniqueName: \"kubernetes.io/projected/bf7af7df-e789-4a3b-b647-3ff2fb52d715-kube-api-access-6kzth\") pod \"coredns-7c65d6cfc9-xhnqf\" (UID: \"bf7af7df-e789-4a3b-b647-3ff2fb52d715\") " pod="kube-system/coredns-7c65d6cfc9-xhnqf"
Jul 6 23:49:58.723512 kubelet[2511]: I0706 23:49:58.722672 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-config\") pod \"goldmane-58fd7646b9-d9k6p\" (UID: \"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51\") " pod="calico-system/goldmane-58fd7646b9-d9k6p"
Jul 6 23:49:58.723512 kubelet[2511]: I0706 23:49:58.722704 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf72l\" (UniqueName: \"kubernetes.io/projected/9626945a-0af4-4eaa-ac43-a94044095a5d-kube-api-access-nf72l\") pod \"calico-apiserver-865bb6f9f-jwkmx\" (UID: \"9626945a-0af4-4eaa-ac43-a94044095a5d\") " pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx"
Jul 6 23:49:58.723512 kubelet[2511]: I0706 23:49:58.722727 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjm8c\" (UniqueName: \"kubernetes.io/projected/b6baffdc-d693-4f2e-98c3-45c2d2376ca7-kube-api-access-pjm8c\") pod \"calico-apiserver-865bb6f9f-bcvjq\" (UID: \"b6baffdc-d693-4f2e-98c3-45c2d2376ca7\") " pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq"
Jul 6 23:49:58.723512 kubelet[2511]: I0706 23:49:58.722750 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6baffdc-d693-4f2e-98c3-45c2d2376ca7-calico-apiserver-certs\") pod \"calico-apiserver-865bb6f9f-bcvjq\" (UID: \"b6baffdc-d693-4f2e-98c3-45c2d2376ca7\") " pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq"
Jul 6 23:49:58.723512 kubelet[2511]: I0706 23:49:58.722773 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p9wv\" (UniqueName: \"kubernetes.io/projected/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-kube-api-access-9p9wv\") pod \"whisker-556fc4889b-95bjd\" (UID: \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\") " pod="calico-system/whisker-556fc4889b-95bjd"
Jul 6 23:49:58.723912 kubelet[2511]: I0706 23:49:58.722793 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cd9fb964-0fb8-4877-9487-51dc490180f3-tigera-ca-bundle\") pod \"calico-kube-controllers-7c44cf5b79-2q9kp\" (UID: \"cd9fb964-0fb8-4877-9487-51dc490180f3\") " pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp"
Jul 6 23:49:58.723912 kubelet[2511]: I0706 23:49:58.722817 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-goldmane-key-pair\") pod \"goldmane-58fd7646b9-d9k6p\" (UID: \"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51\") " pod="calico-system/goldmane-58fd7646b9-d9k6p"
Jul 6 23:49:58.723912 kubelet[2511]: I0706 23:49:58.722838 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2b3a3ca4-77bc-49c7-8b23-f798452500a5-calico-apiserver-certs\") pod \"calico-apiserver-66947d49bf-bxk5j\" (UID: \"2b3a3ca4-77bc-49c7-8b23-f798452500a5\") " pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j"
Jul 6 23:49:58.723912 kubelet[2511]: I0706 23:49:58.722887 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf7af7df-e789-4a3b-b647-3ff2fb52d715-config-volume\") pod \"coredns-7c65d6cfc9-xhnqf\" (UID: \"bf7af7df-e789-4a3b-b647-3ff2fb52d715\") " pod="kube-system/coredns-7c65d6cfc9-xhnqf"
Jul 6 23:49:58.723912 kubelet[2511]: I0706 23:49:58.722906 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-ca-bundle\") pod \"whisker-556fc4889b-95bjd\" (UID: \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\") " pod="calico-system/whisker-556fc4889b-95bjd"
Jul 6 23:49:58.724192 kubelet[2511]: I0706 23:49:58.722931 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzd6\" (UniqueName: \"kubernetes.io/projected/8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd-kube-api-access-gqzd6\") pod \"coredns-7c65d6cfc9-8m7k9\" (UID: \"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd\") " pod="kube-system/coredns-7c65d6cfc9-8m7k9"
Jul 6 23:49:58.724192 kubelet[2511]: I0706 23:49:58.722953 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzspr\" (UniqueName: \"kubernetes.io/projected/cd9fb964-0fb8-4877-9487-51dc490180f3-kube-api-access-nzspr\") pod \"calico-kube-controllers-7c44cf5b79-2q9kp\" (UID: \"cd9fb964-0fb8-4877-9487-51dc490180f3\") " pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp"
Jul 6 23:49:58.724192 kubelet[2511]: I0706 23:49:58.722979 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v42zg\" (UniqueName: \"kubernetes.io/projected/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-kube-api-access-v42zg\") pod \"goldmane-58fd7646b9-d9k6p\" (UID: \"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51\") " pod="calico-system/goldmane-58fd7646b9-d9k6p"
Jul 6 23:49:58.724192 kubelet[2511]: I0706 23:49:58.723002 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whcpt\" (UniqueName: \"kubernetes.io/projected/2b3a3ca4-77bc-49c7-8b23-f798452500a5-kube-api-access-whcpt\") pod \"calico-apiserver-66947d49bf-bxk5j\" (UID: \"2b3a3ca4-77bc-49c7-8b23-f798452500a5\") " pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j"
Jul 6 23:49:58.724192 kubelet[2511]: I0706 23:49:58.723023 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-backend-key-pair\") pod \"whisker-556fc4889b-95bjd\" (UID: \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\") " pod="calico-system/whisker-556fc4889b-95bjd"
Jul 6 23:49:58.724421 kubelet[2511]: I0706 23:49:58.723045 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-d9k6p\" (UID: \"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51\") " pod="calico-system/goldmane-58fd7646b9-d9k6p"
Jul 6 23:49:58.724421 kubelet[2511]: I0706 23:49:58.723067 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9626945a-0af4-4eaa-ac43-a94044095a5d-calico-apiserver-certs\") pod \"calico-apiserver-865bb6f9f-jwkmx\" (UID: \"9626945a-0af4-4eaa-ac43-a94044095a5d\") " pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx"
Jul 6 23:49:58.724421 kubelet[2511]: I0706 23:49:58.723099 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd-config-volume\") pod \"coredns-7c65d6cfc9-8m7k9\" (UID: \"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd\") " pod="kube-system/coredns-7c65d6cfc9-8m7k9"
Jul 6 23:49:58.728988 systemd[1]: Created slice kubepods-besteffort-podb6baffdc_d693_4f2e_98c3_45c2d2376ca7.slice - libcontainer container kubepods-besteffort-podb6baffdc_d693_4f2e_98c3_45c2d2376ca7.slice.
Jul 6 23:49:58.734708 systemd[1]: Created slice kubepods-besteffort-podc3cd80a1_b09b_46fa_82e4_f75c70ebcec0.slice - libcontainer container kubepods-besteffort-podc3cd80a1_b09b_46fa_82e4_f75c70ebcec0.slice.
Jul 6 23:49:58.739865 systemd[1]: Created slice kubepods-besteffort-pod2b3a3ca4_77bc_49c7_8b23_f798452500a5.slice - libcontainer container kubepods-besteffort-pod2b3a3ca4_77bc_49c7_8b23_f798452500a5.slice.
Jul 6 23:49:58.993642 kubelet[2511]: E0706 23:49:58.993515 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:49:58.994144 containerd[1469]: time="2025-07-06T23:49:58.994095692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhnqf,Uid:bf7af7df-e789-4a3b-b647-3ff2fb52d715,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:59.002426 containerd[1469]: time="2025-07-06T23:49:59.002392067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c44cf5b79-2q9kp,Uid:cd9fb964-0fb8-4877-9487-51dc490180f3,Namespace:calico-system,Attempt:0,}"
Jul 6 23:49:59.011633 containerd[1469]: time="2025-07-06T23:49:59.011305729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-jwkmx,Uid:9626945a-0af4-4eaa-ac43-a94044095a5d,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:49:59.019029 kubelet[2511]: E0706 23:49:59.018984 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:49:59.019384 containerd[1469]: time="2025-07-06T23:49:59.019342102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8m7k9,Uid:8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd,Namespace:kube-system,Attempt:0,}"
Jul 6 23:49:59.038513 containerd[1469]: time="2025-07-06T23:49:59.038318687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-bcvjq,Uid:b6baffdc-d693-4f2e-98c3-45c2d2376ca7,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:49:59.040297 containerd[1469]: time="2025-07-06T23:49:59.040092872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-556fc4889b-95bjd,Uid:c3cd80a1-b09b-46fa-82e4-f75c70ebcec0,Namespace:calico-system,Attempt:0,}"
Jul 6 23:49:59.042906 containerd[1469]: time="2025-07-06T23:49:59.042869139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66947d49bf-bxk5j,Uid:2b3a3ca4-77bc-49c7-8b23-f798452500a5,Namespace:calico-apiserver,Attempt:0,}"
Jul 6 23:49:59.144373 containerd[1469]: time="2025-07-06T23:49:59.144327469Z" level=error msg="Failed to destroy network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.146777 containerd[1469]: time="2025-07-06T23:49:59.146720066Z" level=error msg="Failed to destroy network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.147162 containerd[1469]: time="2025-07-06T23:49:59.147123603Z" level=error msg="Failed to destroy network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.147262 containerd[1469]: time="2025-07-06T23:49:59.147146547Z" level=error msg="Failed to destroy network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.151912 containerd[1469]: time="2025-07-06T23:49:59.151870455Z" level=error msg="encountered an error cleaning up failed sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.152083 containerd[1469]: time="2025-07-06T23:49:59.152053649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8m7k9,Uid:8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.152396 containerd[1469]: time="2025-07-06T23:49:59.151883629Z" level=error msg="encountered an error cleaning up failed sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.152396 containerd[1469]: time="2025-07-06T23:49:59.152280706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhnqf,Uid:bf7af7df-e789-4a3b-b647-3ff2fb52d715,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.153996 containerd[1469]: time="2025-07-06T23:49:59.152907444Z" level=error msg="encountered an error cleaning up failed sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.153996 containerd[1469]: time="2025-07-06T23:49:59.153003715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-jwkmx,Uid:9626945a-0af4-4eaa-ac43-a94044095a5d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.154256 containerd[1469]: time="2025-07-06T23:49:59.154221905Z" level=error msg="encountered an error cleaning up failed sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.155677 containerd[1469]: time="2025-07-06T23:49:59.155625292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c44cf5b79-2q9kp,Uid:cd9fb964-0fb8-4877-9487-51dc490180f3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.165956 kubelet[2511]: E0706 23:49:59.165906 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.166440 kubelet[2511]: E0706 23:49:59.165986 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp"
Jul 6 23:49:59.166440 kubelet[2511]: E0706 23:49:59.166008 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp"
Jul 6 23:49:59.166440 kubelet[2511]: E0706 23:49:59.166069 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c44cf5b79-2q9kp_calico-system(cd9fb964-0fb8-4877-9487-51dc490180f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c44cf5b79-2q9kp_calico-system(cd9fb964-0fb8-4877-9487-51dc490180f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp" podUID="cd9fb964-0fb8-4877-9487-51dc490180f3"
Jul 6 23:49:59.166664 kubelet[2511]: E0706 23:49:59.166299 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jul 6 23:49:59.166664 kubelet[2511]: E0706 23:49:59.166317 2511 kuberuntime_sandbox.go:72] "Failed to create
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xhnqf" Jul 6 23:49:59.166664 kubelet[2511]: E0706 23:49:59.166330 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-xhnqf" Jul 6 23:49:59.166787 kubelet[2511]: E0706 23:49:59.166352 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-xhnqf_kube-system(bf7af7df-e789-4a3b-b647-3ff2fb52d715)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-xhnqf_kube-system(bf7af7df-e789-4a3b-b647-3ff2fb52d715)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xhnqf" podUID="bf7af7df-e789-4a3b-b647-3ff2fb52d715" Jul 6 23:49:59.166787 kubelet[2511]: E0706 23:49:59.166381 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.166787 kubelet[2511]: E0706 23:49:59.166397 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx" Jul 6 23:49:59.166914 kubelet[2511]: E0706 23:49:59.166409 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx" Jul 6 23:49:59.166914 kubelet[2511]: E0706 23:49:59.166427 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-865bb6f9f-jwkmx_calico-apiserver(9626945a-0af4-4eaa-ac43-a94044095a5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-865bb6f9f-jwkmx_calico-apiserver(9626945a-0af4-4eaa-ac43-a94044095a5d)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx" podUID="9626945a-0af4-4eaa-ac43-a94044095a5d" Jul 6 23:49:59.167189 kubelet[2511]: E0706 23:49:59.167013 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.167189 kubelet[2511]: E0706 23:49:59.167095 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8m7k9" Jul 6 23:49:59.167189 kubelet[2511]: E0706 23:49:59.167116 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-8m7k9" Jul 6 23:49:59.167309 kubelet[2511]: E0706 23:49:59.167152 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-8m7k9_kube-system(8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-8m7k9_kube-system(8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8m7k9" podUID="8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd" Jul 6 23:49:59.207118 containerd[1469]: time="2025-07-06T23:49:59.206573634Z" level=error msg="Failed to destroy network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.207118 containerd[1469]: time="2025-07-06T23:49:59.206986350Z" level=error msg="encountered an error cleaning up failed sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.207118 containerd[1469]: time="2025-07-06T23:49:59.207026835Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-bcvjq,Uid:b6baffdc-d693-4f2e-98c3-45c2d2376ca7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.208686 kubelet[2511]: E0706 23:49:59.207255 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.208686 kubelet[2511]: E0706 23:49:59.207330 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq" Jul 6 23:49:59.208686 kubelet[2511]: E0706 23:49:59.207351 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq" Jul 6 23:49:59.208776 kubelet[2511]: E0706 23:49:59.207393 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-865bb6f9f-bcvjq_calico-apiserver(b6baffdc-d693-4f2e-98c3-45c2d2376ca7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-865bb6f9f-bcvjq_calico-apiserver(b6baffdc-d693-4f2e-98c3-45c2d2376ca7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq" podUID="b6baffdc-d693-4f2e-98c3-45c2d2376ca7" Jul 6 23:49:59.213050 containerd[1469]: time="2025-07-06T23:49:59.213006012Z" level=error msg="Failed to destroy network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.213429 containerd[1469]: time="2025-07-06T23:49:59.213406165Z" level=error msg="encountered an error cleaning up failed sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 6 23:49:59.213480 containerd[1469]: time="2025-07-06T23:49:59.213460446Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66947d49bf-bxk5j,Uid:2b3a3ca4-77bc-49c7-8b23-f798452500a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.214760 kubelet[2511]: E0706 23:49:59.214720 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.214815 kubelet[2511]: E0706 23:49:59.214773 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j" Jul 6 23:49:59.214815 kubelet[2511]: E0706 23:49:59.214793 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j" Jul 6 23:49:59.214874 kubelet[2511]: E0706 23:49:59.214834 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66947d49bf-bxk5j_calico-apiserver(2b3a3ca4-77bc-49c7-8b23-f798452500a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66947d49bf-bxk5j_calico-apiserver(2b3a3ca4-77bc-49c7-8b23-f798452500a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j" podUID="2b3a3ca4-77bc-49c7-8b23-f798452500a5" Jul 6 23:49:59.232696 containerd[1469]: time="2025-07-06T23:49:59.232624664Z" level=error msg="Failed to destroy network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.233042 containerd[1469]: time="2025-07-06T23:49:59.233012943Z" level=error msg="encountered an error cleaning up failed sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.233073 containerd[1469]: time="2025-07-06T23:49:59.233057467Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-556fc4889b-95bjd,Uid:c3cd80a1-b09b-46fa-82e4-f75c70ebcec0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.233287 kubelet[2511]: E0706 23:49:59.233250 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.233344 kubelet[2511]: E0706 23:49:59.233308 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-556fc4889b-95bjd" Jul 6 23:49:59.233344 kubelet[2511]: E0706 23:49:59.233327 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-556fc4889b-95bjd" Jul 6 23:49:59.233399 kubelet[2511]: E0706 23:49:59.233366 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-556fc4889b-95bjd_calico-system(c3cd80a1-b09b-46fa-82e4-f75c70ebcec0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-556fc4889b-95bjd_calico-system(c3cd80a1-b09b-46fa-82e4-f75c70ebcec0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-556fc4889b-95bjd" podUID="c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" Jul 6 23:49:59.345260 kubelet[2511]: I0706 23:49:59.345236 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:49:59.346403 kubelet[2511]: I0706 23:49:59.346352 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:49:59.387298 kubelet[2511]: I0706 23:49:59.387006 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" 
Jul 6 23:49:59.392968 kubelet[2511]: I0706 23:49:59.392489 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:49:59.396443 containerd[1469]: time="2025-07-06T23:49:59.394808420Z" level=info msg="StopPodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\"" Jul 6 23:49:59.396443 containerd[1469]: time="2025-07-06T23:49:59.394886608Z" level=info msg="StopPodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\"" Jul 6 23:49:59.396443 containerd[1469]: time="2025-07-06T23:49:59.394984251Z" level=info msg="Ensure that sandbox 18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a in task-service has been cleanup successfully" Jul 6 23:49:59.396443 containerd[1469]: time="2025-07-06T23:49:59.395105438Z" level=info msg="Ensure that sandbox 96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b in task-service has been cleanup successfully" Jul 6 23:49:59.396443 containerd[1469]: time="2025-07-06T23:49:59.396102262Z" level=info msg="StopPodSandbox for \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\"" Jul 6 23:49:59.396443 containerd[1469]: time="2025-07-06T23:49:59.396217208Z" level=info msg="Ensure that sandbox 1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77 in task-service has been cleanup successfully" Jul 6 23:49:59.398924 containerd[1469]: time="2025-07-06T23:49:59.398603884Z" level=info msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\"" Jul 6 23:49:59.398924 containerd[1469]: time="2025-07-06T23:49:59.398742534Z" level=info msg="Ensure that sandbox 758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93 in task-service has been cleanup successfully" Jul 6 23:49:59.405523 containerd[1469]: time="2025-07-06T23:49:59.404736469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 6 23:49:59.408658 kubelet[2511]: I0706 23:49:59.406749 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" Jul 6 23:49:59.408768 containerd[1469]: time="2025-07-06T23:49:59.407831576Z" level=info msg="StopPodSandbox for \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\"" Jul 6 23:49:59.408768 containerd[1469]: time="2025-07-06T23:49:59.407995714Z" level=info msg="Ensure that sandbox d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616 in task-service has been cleanup successfully" Jul 6 23:49:59.409662 kubelet[2511]: I0706 23:49:59.409627 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:49:59.414274 containerd[1469]: time="2025-07-06T23:49:59.414226455Z" level=info msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\"" Jul 6 23:49:59.415661 kubelet[2511]: I0706 23:49:59.415619 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:49:59.417520 containerd[1469]: time="2025-07-06T23:49:59.417481021Z" level=info msg="Ensure that sandbox 2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f in task-service has been cleanup successfully" Jul 6 23:49:59.420313 containerd[1469]: time="2025-07-06T23:49:59.419711854Z" level=info 
msg="StopPodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\"" Jul 6 23:49:59.420919 containerd[1469]: time="2025-07-06T23:49:59.420890409Z" level=info msg="Ensure that sandbox 1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9 in task-service has been cleanup successfully" Jul 6 23:49:59.480386 containerd[1469]: time="2025-07-06T23:49:59.479258445Z" level=error msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" failed" error="failed to destroy network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.480386 containerd[1469]: time="2025-07-06T23:49:59.479419127Z" level=error msg="StopPodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" failed" error="failed to destroy network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.480594 kubelet[2511]: E0706 23:49:59.479642 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:49:59.480594 kubelet[2511]: E0706 23:49:59.479714 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b"} Jul 6 23:49:59.480594 kubelet[2511]: E0706 23:49:59.479794 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6baffdc-d693-4f2e-98c3-45c2d2376ca7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.480594 kubelet[2511]: E0706 23:49:59.479826 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6baffdc-d693-4f2e-98c3-45c2d2376ca7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq" podUID="b6baffdc-d693-4f2e-98c3-45c2d2376ca7" Jul 6 23:49:59.480792 kubelet[2511]: E0706 23:49:59.479862 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:49:59.480792 kubelet[2511]: E0706 23:49:59.479882 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"} Jul 6 23:49:59.480792 kubelet[2511]: E0706 23:49:59.479908 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.480792 kubelet[2511]: E0706 23:49:59.479929 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-556fc4889b-95bjd" podUID="c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" Jul 6 23:49:59.481629 containerd[1469]: time="2025-07-06T23:49:59.481593434Z" level=error msg="StopPodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" failed" error="failed to destroy network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.481806 kubelet[2511]: E0706 23:49:59.481778 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:49:59.481855 kubelet[2511]: E0706 23:49:59.481809 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a"} Jul 6 23:49:59.481855 kubelet[2511]: E0706 23:49:59.481839 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cd9fb964-0fb8-4877-9487-51dc490180f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.481931 kubelet[2511]: E0706 23:49:59.481860 2511 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cd9fb964-0fb8-4877-9487-51dc490180f3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp" podUID="cd9fb964-0fb8-4877-9487-51dc490180f3" Jul 6 23:49:59.481989 containerd[1469]: time="2025-07-06T23:49:59.481934935Z" level=error msg="StopPodSandbox for \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\" failed" error="failed to destroy network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.482091 kubelet[2511]: E0706 23:49:59.482034 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" Jul 6 23:49:59.482142 kubelet[2511]: E0706 23:49:59.482096 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77"} Jul 6 23:49:59.482142 kubelet[2511]: E0706 23:49:59.482115 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf7af7df-e789-4a3b-b647-3ff2fb52d715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.482142 kubelet[2511]: E0706 23:49:59.482132 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf7af7df-e789-4a3b-b647-3ff2fb52d715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xhnqf" podUID="bf7af7df-e789-4a3b-b647-3ff2fb52d715" Jul 6 23:49:59.482677 containerd[1469]: time="2025-07-06T23:49:59.482647455Z" level=error msg="StopPodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" failed" error="failed to destroy network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.482785 kubelet[2511]: E0706 
23:49:59.482762 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:49:59.482823 kubelet[2511]: E0706 23:49:59.482786 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9"} Jul 6 23:49:59.482823 kubelet[2511]: E0706 23:49:59.482804 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2b3a3ca4-77bc-49c7-8b23-f798452500a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.482905 kubelet[2511]: E0706 23:49:59.482821 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2b3a3ca4-77bc-49c7-8b23-f798452500a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j" podUID="2b3a3ca4-77bc-49c7-8b23-f798452500a5" Jul 6 23:49:59.483665 containerd[1469]: time="2025-07-06T23:49:59.483619311Z" level=error msg="StopPodSandbox for \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\" failed" error="failed to destroy network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.483781 kubelet[2511]: E0706 23:49:59.483754 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" Jul 6 23:49:59.483824 kubelet[2511]: E0706 23:49:59.483781 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616"} Jul 6 23:49:59.483824 kubelet[2511]: E0706 23:49:59.483800 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.483824 kubelet[2511]: E0706 23:49:59.483816 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8m7k9" podUID="8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd" Jul 6 23:49:59.486506 containerd[1469]: time="2025-07-06T23:49:59.486466682Z" level=error msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\" failed" error="failed to destroy network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:49:59.486671 kubelet[2511]: E0706 23:49:59.486587 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:49:59.486671 kubelet[2511]: E0706 23:49:59.486610 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f"} Jul 6 23:49:59.486671 kubelet[2511]: E0706 23:49:59.486628 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9626945a-0af4-4eaa-ac43-a94044095a5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:49:59.486671 kubelet[2511]: E0706 23:49:59.486657 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9626945a-0af4-4eaa-ac43-a94044095a5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx" podUID="9626945a-0af4-4eaa-ac43-a94044095a5d" Jul 6 23:49:59.824307 kubelet[2511]: E0706 23:49:59.824265 2511 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 6 23:49:59.824502 kubelet[2511]: E0706 23:49:59.824353 2511 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-config podName:3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51 nodeName:}" failed. No retries permitted until 2025-07-06 23:50:00.324336301 +0000 UTC m=+33.311662581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-config") pod "goldmane-58fd7646b9-d9k6p" (UID: "3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51") : failed to sync configmap cache: timed out waiting for the condition Jul 6 23:49:59.824502 kubelet[2511]: E0706 23:49:59.824272 2511 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 6 23:49:59.824502 kubelet[2511]: E0706 23:49:59.824416 2511 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-goldmane-key-pair podName:3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51 nodeName:}" failed. No retries permitted until 2025-07-06 23:50:00.32440489 +0000 UTC m=+33.311731169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51-goldmane-key-pair") pod "goldmane-58fd7646b9-d9k6p" (UID: "3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51") : failed to sync secret cache: timed out waiting for the condition Jul 6 23:50:00.095323 systemd[1]: Created slice kubepods-besteffort-pode9ac6c9c_1856_41b6_91f1_74ff39eba111.slice - libcontainer container kubepods-besteffort-pode9ac6c9c_1856_41b6_91f1_74ff39eba111.slice. Jul 6 23:50:00.105896 containerd[1469]: time="2025-07-06T23:50:00.105832665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkdw8,Uid:e9ac6c9c-1856-41b6-91f1-74ff39eba111,Namespace:calico-system,Attempt:0,}" Jul 6 23:50:00.526860 containerd[1469]: time="2025-07-06T23:50:00.526738105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9k6p,Uid:3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51,Namespace:calico-system,Attempt:0,}" Jul 6 23:50:00.768678 containerd[1469]: time="2025-07-06T23:50:00.768593449Z" level=error msg="Failed to destroy network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.769391 containerd[1469]: time="2025-07-06T23:50:00.769232410Z" level=error msg="encountered an error cleaning up failed sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.769391 containerd[1469]: time="2025-07-06T23:50:00.769282374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9k6p,Uid:3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.769594 
kubelet[2511]: E0706 23:50:00.769527 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.769936 kubelet[2511]: E0706 23:50:00.769630 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d9k6p" Jul 6 23:50:00.769936 kubelet[2511]: E0706 23:50:00.769650 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d9k6p" Jul 6 23:50:00.769936 kubelet[2511]: E0706 23:50:00.769691 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-d9k6p_calico-system(3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-d9k6p_calico-system(3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d9k6p" podUID="3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51" Jul 6 23:50:00.771717 containerd[1469]: time="2025-07-06T23:50:00.771677174Z" level=error msg="Failed to destroy network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.771958 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35-shm.mount: Deactivated successfully. 
Jul 6 23:50:00.772355 containerd[1469]: time="2025-07-06T23:50:00.772061916Z" level=error msg="encountered an error cleaning up failed sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.772355 containerd[1469]: time="2025-07-06T23:50:00.772105279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkdw8,Uid:e9ac6c9c-1856-41b6-91f1-74ff39eba111,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.772430 kubelet[2511]: E0706 23:50:00.772258 2511 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:00.772430 kubelet[2511]: E0706 23:50:00.772294 2511 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:50:00.772430 kubelet[2511]: E0706 23:50:00.772311 2511 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dkdw8" Jul 6 23:50:00.772514 kubelet[2511]: E0706 23:50:00.772341 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dkdw8_calico-system(e9ac6c9c-1856-41b6-91f1-74ff39eba111)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dkdw8_calico-system(e9ac6c9c-1856-41b6-91f1-74ff39eba111)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111" Jul 6 23:50:01.420602 kubelet[2511]: I0706 23:50:01.420517 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" Jul 6 23:50:01.421153 containerd[1469]: time="2025-07-06T23:50:01.421116409Z" level=info msg="StopPodSandbox for 
\"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\"" Jul 6 23:50:01.421664 containerd[1469]: time="2025-07-06T23:50:01.421310683Z" level=info msg="Ensure that sandbox 2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503 in task-service has been cleanup successfully" Jul 6 23:50:01.421707 kubelet[2511]: I0706 23:50:01.421287 2511 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:01.421775 containerd[1469]: time="2025-07-06T23:50:01.421739439Z" level=info msg="StopPodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\"" Jul 6 23:50:01.421907 containerd[1469]: time="2025-07-06T23:50:01.421878400Z" level=info msg="Ensure that sandbox f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35 in task-service has been cleanup successfully" Jul 6 23:50:01.451556 containerd[1469]: time="2025-07-06T23:50:01.451479636Z" level=error msg="StopPodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" failed" error="failed to destroy network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:01.451793 kubelet[2511]: E0706 23:50:01.451759 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:01.451852 kubelet[2511]: E0706 23:50:01.451806 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35"} Jul 6 23:50:01.451882 kubelet[2511]: E0706 23:50:01.451847 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:01.451963 kubelet[2511]: E0706 23:50:01.451876 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d9k6p" podUID="3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51" Jul 6 23:50:01.453283 containerd[1469]: time="2025-07-06T23:50:01.453233131Z" level=error msg="StopPodSandbox for \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\" failed" 
error="failed to destroy network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:01.453408 kubelet[2511]: E0706 23:50:01.453364 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" Jul 6 23:50:01.453450 kubelet[2511]: E0706 23:50:01.453420 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503"} Jul 6 23:50:01.453478 kubelet[2511]: E0706 23:50:01.453453 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:01.453549 kubelet[2511]: E0706 23:50:01.453487 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111" Jul 6 23:50:01.687804 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503-shm.mount: Deactivated successfully. Jul 6 23:50:07.999381 kubelet[2511]: I0706 23:50:07.999270 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 6 23:50:08.005607 kubelet[2511]: E0706 23:50:08.002527 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:08.432662 kubelet[2511]: E0706 23:50:08.432630 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:09.517360 systemd[1]: Started sshd@7-10.0.0.53:22-10.0.0.1:54020.service - OpenSSH per-connection server daemon (10.0.0.1:54020). Jul 6 23:50:09.578985 sshd[3807]: Accepted publickey for core from 10.0.0.1 port 54020 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:09.581238 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:09.587683 systemd-logind[1450]: New session 8 of user core. 
Jul 6 23:50:09.592969 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:50:09.768746 sshd[3807]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:09.774011 systemd[1]: sshd@7-10.0.0.53:22-10.0.0.1:54020.service: Deactivated successfully. Jul 6 23:50:09.776528 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:50:09.777556 systemd-logind[1450]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:50:09.778822 systemd-logind[1450]: Removed session 8. Jul 6 23:50:10.053006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969741480.mount: Deactivated successfully. Jul 6 23:50:11.090814 containerd[1469]: time="2025-07-06T23:50:11.090679853Z" level=info msg="StopPodSandbox for \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\"" Jul 6 23:50:12.091013 containerd[1469]: time="2025-07-06T23:50:12.090818064Z" level=info msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\"" Jul 6 23:50:12.091013 containerd[1469]: time="2025-07-06T23:50:12.090857538Z" level=info msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\"" Jul 6 23:50:12.091954 containerd[1469]: time="2025-07-06T23:50:12.090823144Z" level=info msg="StopPodSandbox for \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\"" Jul 6 23:50:12.091954 containerd[1469]: time="2025-07-06T23:50:12.091592198Z" level=info msg="StopPodSandbox for \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\"" Jul 6 23:50:12.200216 containerd[1469]: time="2025-07-06T23:50:12.200140201Z" level=error msg="StopPodSandbox for \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\" failed" error="failed to destroy network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:12.268899 containerd[1469]: time="2025-07-06T23:50:12.201259081Z" level=error msg="StopPodSandbox for \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\" failed" error="failed to destroy network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:12.268899 containerd[1469]: time="2025-07-06T23:50:12.208647949Z" level=error msg="StopPodSandbox for \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\" failed" error="failed to destroy network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:12.268899 containerd[1469]: time="2025-07-06T23:50:12.211135598Z" level=error msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" failed" error="failed to destroy network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:12.268899 containerd[1469]: 
time="2025-07-06T23:50:12.213715140Z" level=error msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\" failed" error="failed to destroy network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 6 23:50:12.269168 kubelet[2511]: E0706 23:50:12.201262 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" Jul 6 23:50:12.269168 kubelet[2511]: E0706 23:50:12.201335 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77"} Jul 6 23:50:12.269168 kubelet[2511]: E0706 23:50:12.201375 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf7af7df-e789-4a3b-b647-3ff2fb52d715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:12.269168 kubelet[2511]: E0706 23:50:12.201401 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf7af7df-e789-4a3b-b647-3ff2fb52d715\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-xhnqf" podUID="bf7af7df-e789-4a3b-b647-3ff2fb52d715" Jul 6 23:50:12.269971 kubelet[2511]: E0706 23:50:12.201446 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" Jul 6 23:50:12.269971 kubelet[2511]: E0706 23:50:12.201515 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503"} Jul 6 23:50:12.269971 kubelet[2511]: E0706 23:50:12.201610 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:12.269971 kubelet[2511]: E0706 23:50:12.201642 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e9ac6c9c-1856-41b6-91f1-74ff39eba111\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dkdw8" podUID="e9ac6c9c-1856-41b6-91f1-74ff39eba111" Jul 6 23:50:12.270155 kubelet[2511]: E0706 23:50:12.208895 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" Jul 6 23:50:12.270155 kubelet[2511]: E0706 23:50:12.208971 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616"} Jul 6 23:50:12.270155 kubelet[2511]: E0706 23:50:12.209001 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:12.270155 kubelet[2511]: E0706 23:50:12.209026 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-8m7k9" podUID="8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd" Jul 6 23:50:12.270321 kubelet[2511]: E0706 23:50:12.211361 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:50:12.270321 kubelet[2511]: E0706 23:50:12.211398 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"} Jul 6 23:50:12.270321 kubelet[2511]: E0706 
23:50:12.211426 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:12.270321 kubelet[2511]: E0706 23:50:12.211450 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-556fc4889b-95bjd" podUID="c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" Jul 6 23:50:12.270480 kubelet[2511]: E0706 23:50:12.213840 2511 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:50:12.270480 kubelet[2511]: E0706 23:50:12.213864 2511 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f"} Jul 6 23:50:12.270480 kubelet[2511]: E0706 23:50:12.213883 2511 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9626945a-0af4-4eaa-ac43-a94044095a5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 6 23:50:12.270480 kubelet[2511]: E0706 23:50:12.213903 2511 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9626945a-0af4-4eaa-ac43-a94044095a5d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx" podUID="9626945a-0af4-4eaa-ac43-a94044095a5d" Jul 6 23:50:12.857137 containerd[1469]: time="2025-07-06T23:50:12.857054381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:12.859709 containerd[1469]: time="2025-07-06T23:50:12.859631869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 6 
23:50:12.862924 containerd[1469]: time="2025-07-06T23:50:12.862881348Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:12.871051 containerd[1469]: time="2025-07-06T23:50:12.870987533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:12.871856 containerd[1469]: time="2025-07-06T23:50:12.871814595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 13.467031016s" Jul 6 23:50:12.871856 containerd[1469]: time="2025-07-06T23:50:12.871850642Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 6 23:50:12.882671 containerd[1469]: time="2025-07-06T23:50:12.882614706Z" level=info msg="CreateContainer within sandbox \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 6 23:50:12.911247 containerd[1469]: time="2025-07-06T23:50:12.911190619Z" level=info msg="CreateContainer within sandbox \"db4be5c5b80494ac14e2d18727c4a74250c86469cd50e5b49d090e2fc0915fda\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c\"" Jul 6 23:50:12.911805 containerd[1469]: time="2025-07-06T23:50:12.911769317Z" level=info msg="StartContainer for \"8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c\"" Jul 6 23:50:12.968717 systemd[1]: Started cri-containerd-8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c.scope - libcontainer container 8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c. Jul 6 23:50:13.004405 containerd[1469]: time="2025-07-06T23:50:13.004351172Z" level=info msg="StartContainer for \"8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c\" returns successfully" Jul 6 23:50:13.089104 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 6 23:50:13.089218 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 6 23:50:13.171251 containerd[1469]: time="2025-07-06T23:50:13.170695374Z" level=info msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\"" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.246 [INFO][3984] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.247 [INFO][3984] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" iface="eth0" netns="/var/run/netns/cni-ccef2295-2035-aa65-366e-5128d8ac3a53" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.247 [INFO][3984] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth.
ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" iface="eth0" netns="/var/run/netns/cni-ccef2295-2035-aa65-366e-5128d8ac3a53" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.247 [INFO][3984] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" iface="eth0" netns="/var/run/netns/cni-ccef2295-2035-aa65-366e-5128d8ac3a53" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.247 [INFO][3984] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.247 [INFO][3984] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.316 [INFO][3995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.317 [INFO][3995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.317 [INFO][3995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.324 [WARNING][3995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.324 [INFO][3995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0" Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.325 [INFO][3995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:13.332205 containerd[1469]: 2025-07-06 23:50:13.328 [INFO][3984] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Jul 6 23:50:13.332837 containerd[1469]: time="2025-07-06T23:50:13.332359682Z" level=info msg="TearDown network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" successfully" Jul 6 23:50:13.332837 containerd[1469]: time="2025-07-06T23:50:13.332401190Z" level=info msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" returns successfully" Jul 6 23:50:13.417469 kubelet[2511]: I0706 23:50:13.417129 2511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-backend-key-pair\") pod \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" (UID: \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\") " Jul 6 23:50:13.417469 kubelet[2511]: I0706 23:50:13.417378 2511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-ca-bundle\") pod \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" (UID: \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\") " Jul 6 23:50:13.418198 kubelet[2511]: I0706 23:50:13.417685 2511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p9wv\" (UniqueName: \"kubernetes.io/projected/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-kube-api-access-9p9wv\") pod \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\" (UID: \"c3cd80a1-b09b-46fa-82e4-f75c70ebcec0\") " Jul 6 23:50:13.418198 kubelet[2511]: I0706 23:50:13.417967 2511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" (UID: "c3cd80a1-b09b-46fa-82e4-f75c70ebcec0"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:50:13.422576 kubelet[2511]: I0706 23:50:13.422408 2511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" (UID: "c3cd80a1-b09b-46fa-82e4-f75c70ebcec0"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:50:13.422734 kubelet[2511]: I0706 23:50:13.422679 2511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-kube-api-access-9p9wv" (OuterVolumeSpecName: "kube-api-access-9p9wv") pod "c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" (UID: "c3cd80a1-b09b-46fa-82e4-f75c70ebcec0"). InnerVolumeSpecName "kube-api-access-9p9wv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:50:13.453699 systemd[1]: Removed slice kubepods-besteffort-podc3cd80a1_b09b_46fa_82e4_f75c70ebcec0.slice - libcontainer container kubepods-besteffort-podc3cd80a1_b09b_46fa_82e4_f75c70ebcec0.slice. 
Jul 6 23:50:13.518338 kubelet[2511]: I0706 23:50:13.518257 2511 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p9wv\" (UniqueName: \"kubernetes.io/projected/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-kube-api-access-9p9wv\") on node \"localhost\" DevicePath \"\"" Jul 6 23:50:13.518338 kubelet[2511]: I0706 23:50:13.518324 2511 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 6 23:50:13.518338 kubelet[2511]: I0706 23:50:13.518340 2511 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 6 23:50:13.864152 kubelet[2511]: I0706 23:50:13.864077 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9f7tx" podStartSLOduration=1.6972603720000001 podStartE2EDuration="25.864056849s" podCreationTimestamp="2025-07-06 23:49:48 +0000 UTC" firstStartedPulling="2025-07-06 23:49:48.705886338 +0000 UTC m=+21.693212617" lastFinishedPulling="2025-07-06 23:50:12.872682814 +0000 UTC m=+45.860009094" observedRunningTime="2025-07-06 23:50:13.856425126 +0000 UTC m=+46.843751415" watchObservedRunningTime="2025-07-06 23:50:13.864056849 +0000 UTC m=+46.851383128" Jul 6 23:50:13.873341 systemd[1]: Created slice kubepods-besteffort-pod136eb96a_92d0_4c56_9b1f_d808f5a7e5e8.slice - libcontainer container kubepods-besteffort-pod136eb96a_92d0_4c56_9b1f_d808f5a7e5e8.slice. Jul 6 23:50:13.883353 systemd[1]: run-netns-cni\x2dccef2295\x2d2035\x2daa65\x2d366e\x2d5128d8ac3a53.mount: Deactivated successfully. Jul 6 23:50:13.883488 systemd[1]: var-lib-kubelet-pods-c3cd80a1\x2db09b\x2d46fa\x2d82e4\x2df75c70ebcec0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9p9wv.mount: Deactivated successfully. Jul 6 23:50:13.883591 systemd[1]: var-lib-kubelet-pods-c3cd80a1\x2db09b\x2d46fa\x2d82e4\x2df75c70ebcec0-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
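The "Observed pod startup duration" entry above decomposes cleanly: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (23:50:13.864056849 - 23:49:48 = 25.864056849s), and podStartSLOduration is that E2E figure minus the time spent pulling images (lastFinishedPulling - firstStartedPulling). A short reproduction of the arithmetic, meant as a reading aid rather than kubelet's exact pod_startup_latency_tracker code:

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func mustParse(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-07-06 23:49:48 +0000 UTC")
        pullStart := mustParse("2025-07-06 23:49:48.705886338 +0000 UTC")
        pullEnd := mustParse("2025-07-06 23:50:12.872682814 +0000 UTC")
        running := mustParse("2025-07-06 23:50:13.864056849 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)       // 25.864056849s, logged as podStartE2EDuration
        pulling := pullEnd.Sub(pullStart) // 24.166796476s from the wall-clock timestamps
        slo := e2e - pulling              // prints 1.697260373s

        // kubelet logs 1.6972603720000001: one nanosecond lower, because its
        // pull window comes from the monotonic readings (the "m=+..." offsets,
        // 45.860009094 - 21.693212617 = 24.166796477s); the long decimal tail
        // is ordinary float64 noise when the duration is converted to seconds.
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("image pull time:    ", pulling)
        fmt.Println("podStartSLOduration:", slo)
    }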
Jul 6 23:50:13.921464 kubelet[2511]: I0706 23:50:13.921423 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/136eb96a-92d0-4c56-9b1f-d808f5a7e5e8-whisker-backend-key-pair\") pod \"whisker-6f9cd94d88-l6xgj\" (UID: \"136eb96a-92d0-4c56-9b1f-d808f5a7e5e8\") " pod="calico-system/whisker-6f9cd94d88-l6xgj" Jul 6 23:50:13.921464 kubelet[2511]: I0706 23:50:13.921464 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/136eb96a-92d0-4c56-9b1f-d808f5a7e5e8-whisker-ca-bundle\") pod \"whisker-6f9cd94d88-l6xgj\" (UID: \"136eb96a-92d0-4c56-9b1f-d808f5a7e5e8\") " pod="calico-system/whisker-6f9cd94d88-l6xgj" Jul 6 23:50:13.921606 kubelet[2511]: I0706 23:50:13.921480 2511 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8bpd\" (UniqueName: \"kubernetes.io/projected/136eb96a-92d0-4c56-9b1f-d808f5a7e5e8-kube-api-access-m8bpd\") pod \"whisker-6f9cd94d88-l6xgj\" (UID: \"136eb96a-92d0-4c56-9b1f-d808f5a7e5e8\") " pod="calico-system/whisker-6f9cd94d88-l6xgj" Jul 6 23:50:14.090822 containerd[1469]: time="2025-07-06T23:50:14.090667819Z" level=info msg="StopPodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\"" Jul 6 23:50:14.090822 containerd[1469]: time="2025-07-06T23:50:14.090777835Z" level=info msg="StopPodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\"" Jul 6 23:50:14.091352 containerd[1469]: time="2025-07-06T23:50:14.090777324Z" level=info msg="StopPodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\"" Jul 6 23:50:14.180332 containerd[1469]: time="2025-07-06T23:50:14.180023740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9cd94d88-l6xgj,Uid:136eb96a-92d0-4c56-9b1f-d808f5a7e5e8,Namespace:calico-system,Attempt:0,}" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.138 [INFO][4050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.140 [INFO][4050] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" iface="eth0" netns="/var/run/netns/cni-54b72cbc-703d-46f6-28fe-4122a0918954" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.141 [INFO][4050] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" iface="eth0" netns="/var/run/netns/cni-54b72cbc-703d-46f6-28fe-4122a0918954" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.141 [INFO][4050] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" iface="eth0" netns="/var/run/netns/cni-54b72cbc-703d-46f6-28fe-4122a0918954" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.141 [INFO][4050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.141 [INFO][4050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.174 [INFO][4074] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.174 [INFO][4074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.174 [INFO][4074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.180 [WARNING][4074] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.180 [INFO][4074] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.182 [INFO][4074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:14.189392 containerd[1469]: 2025-07-06 23:50:14.186 [INFO][4050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:14.190327 containerd[1469]: time="2025-07-06T23:50:14.190121600Z" level=info msg="TearDown network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" successfully" Jul 6 23:50:14.190327 containerd[1469]: time="2025-07-06T23:50:14.190158139Z" level=info msg="StopPodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" returns successfully" Jul 6 23:50:14.193373 containerd[1469]: time="2025-07-06T23:50:14.193286820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66947d49bf-bxk5j,Uid:2b3a3ca4-77bc-49c7-8b23-f798452500a5,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:50:14.194009 systemd[1]: run-netns-cni\x2d54b72cbc\x2d703d\x2d46f6\x2d28fe\x2d4122a0918954.mount: Deactivated successfully. Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.148 [INFO][4051] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.148 [INFO][4051] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" iface="eth0" netns="/var/run/netns/cni-823fef03-0bc5-ba4c-9459-25cba0366d40" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.149 [INFO][4051] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" iface="eth0" netns="/var/run/netns/cni-823fef03-0bc5-ba4c-9459-25cba0366d40" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.149 [INFO][4051] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" iface="eth0" netns="/var/run/netns/cni-823fef03-0bc5-ba4c-9459-25cba0366d40" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.149 [INFO][4051] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.149 [INFO][4051] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.191 [INFO][4080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.191 [INFO][4080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.191 [INFO][4080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.199 [WARNING][4080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.199 [INFO][4080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.203 [INFO][4080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:14.208752 containerd[1469]: 2025-07-06 23:50:14.205 [INFO][4051] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:14.209244 containerd[1469]: time="2025-07-06T23:50:14.209196488Z" level=info msg="TearDown network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" successfully" Jul 6 23:50:14.209244 containerd[1469]: time="2025-07-06T23:50:14.209230942Z" level=info msg="StopPodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" returns successfully" Jul 6 23:50:14.213346 containerd[1469]: time="2025-07-06T23:50:14.211928464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-bcvjq,Uid:b6baffdc-d693-4f2e-98c3-45c2d2376ca7,Namespace:calico-apiserver,Attempt:1,}" Jul 6 23:50:14.212921 systemd[1]: run-netns-cni\x2d823fef03\x2d0bc5\x2dba4c\x2d9459\x2d25cba0366d40.mount: Deactivated successfully. Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.163 [INFO][4059] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.164 [INFO][4059] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" iface="eth0" netns="/var/run/netns/cni-583f9fb5-e276-35aa-59b2-41e9efbbf854" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.164 [INFO][4059] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" iface="eth0" netns="/var/run/netns/cni-583f9fb5-e276-35aa-59b2-41e9efbbf854" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.164 [INFO][4059] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" iface="eth0" netns="/var/run/netns/cni-583f9fb5-e276-35aa-59b2-41e9efbbf854" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.164 [INFO][4059] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.164 [INFO][4059] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.196 [INFO][4088] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.197 [INFO][4088] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.203 [INFO][4088] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.211 [WARNING][4088] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.211 [INFO][4088] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.213 [INFO][4088] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:14.220179 containerd[1469]: 2025-07-06 23:50:14.216 [INFO][4059] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:14.220757 containerd[1469]: time="2025-07-06T23:50:14.220376999Z" level=info msg="TearDown network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" successfully" Jul 6 23:50:14.220757 containerd[1469]: time="2025-07-06T23:50:14.220394262Z" level=info msg="StopPodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" returns successfully" Jul 6 23:50:14.220891 containerd[1469]: time="2025-07-06T23:50:14.220865025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c44cf5b79-2q9kp,Uid:cd9fb964-0fb8-4877-9487-51dc490180f3,Namespace:calico-system,Attempt:1,}" Jul 6 23:50:14.468816 systemd-networkd[1393]: calib9a9a806e14: Link UP Jul 6 23:50:14.474507 systemd-networkd[1393]: calib9a9a806e14: Gained carrier Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.311 [INFO][4101] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.325 [INFO][4101] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0 whisker-6f9cd94d88- calico-system 136eb96a-92d0-4c56-9b1f-d808f5a7e5e8 1037 0 2025-07-06 23:50:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6f9cd94d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6f9cd94d88-l6xgj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib9a9a806e14 [] [] }} ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.325 [INFO][4101] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.357 [INFO][4153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" HandleID="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Workload="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.531776 
containerd[1469]: 2025-07-06 23:50:14.357 [INFO][4153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" HandleID="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Workload="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138470), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6f9cd94d88-l6xgj", "timestamp":"2025-07-06 23:50:14.357649735 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.357 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.357 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.357 [INFO][4153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.367 [INFO][4153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.374 [INFO][4153] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.379 [INFO][4153] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.382 [INFO][4153] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.386 [INFO][4153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.386 [INFO][4153] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.390 [INFO][4153] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163 Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.396 [INFO][4153] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.448 [INFO][4153] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.448 [INFO][4153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" host="localhost" Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.448 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:50:14.531776 containerd[1469]: 2025-07-06 23:50:14.448 [INFO][4153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" HandleID="k8s-pod-network.6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Workload="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.533313 containerd[1469]: 2025-07-06 23:50:14.452 [INFO][4101] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0", GenerateName:"whisker-6f9cd94d88-", Namespace:"calico-system", SelfLink:"", UID:"136eb96a-92d0-4c56-9b1f-d808f5a7e5e8", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9cd94d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6f9cd94d88-l6xgj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib9a9a806e14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.533313 containerd[1469]: 2025-07-06 23:50:14.452 [INFO][4101] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.533313 containerd[1469]: 2025-07-06 23:50:14.452 [INFO][4101] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9a9a806e14 ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.533313 containerd[1469]: 2025-07-06 23:50:14.480 [INFO][4101] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.533313 containerd[1469]: 2025-07-06 23:50:14.483 [INFO][4101] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0", GenerateName:"whisker-6f9cd94d88-", Namespace:"calico-system", SelfLink:"", UID:"136eb96a-92d0-4c56-9b1f-d808f5a7e5e8", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 50, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6f9cd94d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163", Pod:"whisker-6f9cd94d88-l6xgj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib9a9a806e14", MAC:"f2:e5:85:cc:32:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.533313 containerd[1469]: 2025-07-06 23:50:14.514 [INFO][4101] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163" Namespace="calico-system" Pod="whisker-6f9cd94d88-l6xgj" WorkloadEndpoint="localhost-k8s-whisker--6f9cd94d88--l6xgj-eth0" Jul 6 23:50:14.642034 systemd-networkd[1393]: cali119b2ad589c: Link UP Jul 6 23:50:14.643008 systemd-networkd[1393]: cali119b2ad589c: Gained carrier Jul 6 23:50:14.677635 containerd[1469]: time="2025-07-06T23:50:14.676831473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:14.677635 containerd[1469]: time="2025-07-06T23:50:14.676927152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:14.677635 containerd[1469]: time="2025-07-06T23:50:14.676938363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.677635 containerd[1469]: time="2025-07-06T23:50:14.677038943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.347 [INFO][4120] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.358 [INFO][4120] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0 calico-apiserver-865bb6f9f- calico-apiserver b6baffdc-d693-4f2e-98c3-45c2d2376ca7 1043 0 2025-07-06 23:49:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:865bb6f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-865bb6f9f-bcvjq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali119b2ad589c [] [] }} ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.358 [INFO][4120] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.406 [INFO][4175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" HandleID="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.406 [INFO][4175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" HandleID="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005147e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-865bb6f9f-bcvjq", "timestamp":"2025-07-06 23:50:14.406516331 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.407 [INFO][4175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.448 [INFO][4175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.448 [INFO][4175] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.472 [INFO][4175] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.488 [INFO][4175] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.526 [INFO][4175] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.530 [INFO][4175] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.535 [INFO][4175] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.535 [INFO][4175] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.537 [INFO][4175] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.572 [INFO][4175] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.633 [INFO][4175] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.634 [INFO][4175] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" host="localhost" Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.634 [INFO][4175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
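
[editor's note] The [4175] sequence ending above is Calico's IPAM claim path in miniature: confirm the host's affinity to block 192.168.88.128/26, load the block, pick a free address, and write the block back to claim it. The sketch below models only the "assign next free address from an affine block" step; the bitmap layout and function names are assumptions for illustration, not Calico's implementation.

package main

import (
    "fmt"
    "net/netip"
)

// block is a toy stand-in for a Calico IPAM block: a CIDR plus a
// per-address allocation bitmap. The real block type also tracks
// handles and attributes; this sketch only models free/used slots.
type block struct {
    cidr netip.Prefix
    used []bool // one slot per address in the block
}

func newBlock(cidr string) *block {
    p := netip.MustParsePrefix(cidr)
    n := 1 << (32 - p.Bits()) // 64 slots for a /26
    return &block{cidr: p, used: make([]bool, n)}
}

// assign claims the lowest free address, mirroring the log's
// "Attempting to assign 1 addresses from block" step. Slot 0 (the
// network address) is skipped, so the first address handed out is .129.
func (b *block) assign() (netip.Addr, bool) {
    addr := b.cidr.Addr()
    for i := 0; i < len(b.used); i++ {
        if i > 0 && !b.used[i] {
            b.used[i] = true
            return addr, true
        }
        addr = addr.Next()
    }
    return netip.Addr{}, false
}

func main() {
    blk := newBlock("192.168.88.128/26")
    // The pods in this log received .129, .130, .131, ... in order.
    for i := 0; i < 3; i++ {
        ip, _ := blk.assign()
        fmt.Println(ip)
    }
}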
Jul 6 23:50:14.677901 containerd[1469]: 2025-07-06 23:50:14.634 [INFO][4175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" HandleID="k8s-pod-network.967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.678740 containerd[1469]: 2025-07-06 23:50:14.638 [INFO][4120] cni-plugin/k8s.go 418: Populated endpoint ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6baffdc-d693-4f2e-98c3-45c2d2376ca7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-865bb6f9f-bcvjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali119b2ad589c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.678740 containerd[1469]: 2025-07-06 23:50:14.638 [INFO][4120] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.678740 containerd[1469]: 2025-07-06 23:50:14.638 [INFO][4120] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali119b2ad589c ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.678740 containerd[1469]: 2025-07-06 23:50:14.643 [INFO][4120] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.678740 containerd[1469]: 2025-07-06 23:50:14.644 [INFO][4120] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6baffdc-d693-4f2e-98c3-45c2d2376ca7", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a", Pod:"calico-apiserver-865bb6f9f-bcvjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali119b2ad589c", MAC:"06:68:81:88:80:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.678740 containerd[1469]: 2025-07-06 23:50:14.658 [INFO][4120] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-bcvjq" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:14.707226 systemd[1]: Started cri-containerd-6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163.scope - libcontainer container 6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163. Jul 6 23:50:14.729432 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:14.732709 containerd[1469]: time="2025-07-06T23:50:14.732571598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:14.740559 containerd[1469]: time="2025-07-06T23:50:14.739282864Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:14.740559 containerd[1469]: time="2025-07-06T23:50:14.739334591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.740559 containerd[1469]: time="2025-07-06T23:50:14.739472099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.752728 systemd-networkd[1393]: calif0c0403a82a: Link UP Jul 6 23:50:14.764680 systemd-networkd[1393]: calif0c0403a82a: Gained carrier Jul 6 23:50:14.766614 systemd[1]: Started cri-containerd-967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a.scope - libcontainer container 967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a. Jul 6 23:50:14.777986 systemd[1]: Started sshd@8-10.0.0.53:22-10.0.0.1:54028.service - OpenSSH per-connection server daemon (10.0.0.1:54028). Jul 6 23:50:14.793833 containerd[1469]: time="2025-07-06T23:50:14.793687783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6f9cd94d88-l6xgj,Uid:136eb96a-92d0-4c56-9b1f-d808f5a7e5e8,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163\"" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.339 [INFO][4112] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.356 [INFO][4112] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0 calico-apiserver-66947d49bf- calico-apiserver 2b3a3ca4-77bc-49c7-8b23-f798452500a5 1042 0 2025-07-06 23:49:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66947d49bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-66947d49bf-bxk5j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif0c0403a82a [] [] }} ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.356 [INFO][4112] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.407 [INFO][4166] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" HandleID="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.411 [INFO][4166] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" HandleID="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00050b530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-66947d49bf-bxk5j", "timestamp":"2025-07-06 23:50:14.407111858 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.411 [INFO][4166] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.634 [INFO][4166] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.634 [INFO][4166] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.640 [INFO][4166] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.650 [INFO][4166] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.668 [INFO][4166] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.671 [INFO][4166] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.675 [INFO][4166] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.675 [INFO][4166] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.678 [INFO][4166] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050 Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.683 [INFO][4166] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.688 [INFO][4166] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.688 [INFO][4166] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" host="localhost" Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.688 [INFO][4166] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
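
[editor's note] Note the timestamps above: request [4166] logged "About to acquire host-wide IPAM lock" at 14.411 but "Acquired" only at 14.634 — the instant [4175] released it. Concurrent CNI ADDs on one host are strictly serialized by this lock, which is why the pods end up with consecutive addresses. A toy model of that serialization, with purely illustrative names:

package main

import (
    "fmt"
    "sync"
)

// ipam models the "host-wide IPAM lock" visible in the log: concurrent
// CNI ADD requests ([4166], [4173], [4175]) all take the same lock, so
// assignments are serialized and no two pods can claim the same address.
type ipam struct {
    mu   sync.Mutex // the host-wide lock
    next int        // next free host index within the block
}

func (a *ipam) autoAssign(pod string, wg *sync.WaitGroup) {
    defer wg.Done()
    fmt.Printf("[%s] About to acquire host-wide IPAM lock.\n", pod)
    a.mu.Lock() // blocks while another request holds the lock
    fmt.Printf("[%s] Acquired host-wide IPAM lock.\n", pod)
    ip := fmt.Sprintf("192.168.88.%d/26", 128+a.next)
    a.next++
    a.mu.Unlock()
    fmt.Printf("[%s] Released host-wide IPAM lock. Assigned %s\n", pod, ip)
}

func main() {
    a := &ipam{next: 1} // .129 is the first address handed out
    var wg sync.WaitGroup
    for _, pod := range []string{"whisker", "apiserver-bcvjq", "apiserver-bxk5j"} {
        wg.Add(1)
        go a.autoAssign(pod, &wg)
    }
    wg.Wait()
}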
Jul 6 23:50:14.796481 containerd[1469]: 2025-07-06 23:50:14.688 [INFO][4166] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" HandleID="k8s-pod-network.de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.797964 containerd[1469]: 2025-07-06 23:50:14.723 [INFO][4112] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0", GenerateName:"calico-apiserver-66947d49bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3a3ca4-77bc-49c7-8b23-f798452500a5", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66947d49bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-66947d49bf-bxk5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0c0403a82a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.797964 containerd[1469]: 2025-07-06 23:50:14.726 [INFO][4112] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.797964 containerd[1469]: 2025-07-06 23:50:14.726 [INFO][4112] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0c0403a82a ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.797964 containerd[1469]: 2025-07-06 23:50:14.773 [INFO][4112] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.797964 containerd[1469]: 2025-07-06 23:50:14.775 [INFO][4112] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0", GenerateName:"calico-apiserver-66947d49bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3a3ca4-77bc-49c7-8b23-f798452500a5", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66947d49bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050", Pod:"calico-apiserver-66947d49bf-bxk5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0c0403a82a", MAC:"ce:7e:22:1b:a5:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.797964 containerd[1469]: 2025-07-06 23:50:14.786 [INFO][4112] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050" Namespace="calico-apiserver" Pod="calico-apiserver-66947d49bf-bxk5j" WorkloadEndpoint="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:14.802257 containerd[1469]: time="2025-07-06T23:50:14.801979444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 6 23:50:14.823677 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:14.824894 systemd-networkd[1393]: cali634b6b6742e: Link UP Jul 6 23:50:14.825672 systemd-networkd[1393]: cali634b6b6742e: Gained carrier Jul 6 23:50:14.834683 containerd[1469]: time="2025-07-06T23:50:14.834299017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:14.834683 containerd[1469]: time="2025-07-06T23:50:14.834359420Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:14.834683 containerd[1469]: time="2025-07-06T23:50:14.834372545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.834683 containerd[1469]: time="2025-07-06T23:50:14.834456563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.846500 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 54028 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:14.847673 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:14.857777 systemd[1]: Started cri-containerd-de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050.scope - libcontainer container de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050. Jul 6 23:50:14.861457 systemd-logind[1450]: New session 9 of user core. Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.347 [INFO][4127] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.356 [INFO][4127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0 calico-kube-controllers-7c44cf5b79- calico-system cd9fb964-0fb8-4877-9487-51dc490180f3 1044 0 2025-07-06 23:49:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c44cf5b79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c44cf5b79-2q9kp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali634b6b6742e [] [] }} ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.356 [INFO][4127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.413 [INFO][4173] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" HandleID="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.414 [INFO][4173] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" HandleID="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139480), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c44cf5b79-2q9kp", "timestamp":"2025-07-06 23:50:14.413774202 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.414 [INFO][4173] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.689 [INFO][4173] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.689 [INFO][4173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.745 [INFO][4173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.766 [INFO][4173] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.775 [INFO][4173] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.786 [INFO][4173] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.791 [INFO][4173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.791 [INFO][4173] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.793 [INFO][4173] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.797 [INFO][4173] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.808 [INFO][4173] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.809 [INFO][4173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" host="localhost" Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.809 [INFO][4173] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
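
[editor's note] WorkloadEndpoint names like localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0 look garbled but follow a pattern: node, orchestrator, pod, and interface joined by single dashes, with dashes inside each field doubled so the name stays unambiguous. The reconstruction below is inferred from the names appearing in this log, not taken from Calico's source.

package main

import (
    "fmt"
    "strings"
)

// endpointName joins the four fields with "-" as separator, escaping
// literal dashes inside a field by doubling them -- which is why the
// pod calico-kube-controllers-7c44cf5b79-2q9kp appears in the log as
// calico--kube--controllers--7c44cf5b79--2q9kp.
func endpointName(node, orchestrator, pod, iface string) string {
    esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
    return strings.Join([]string{esc(node), esc(orchestrator), esc(pod), esc(iface)}, "-")
}

func main() {
    fmt.Println(endpointName("localhost", "k8s", "calico-kube-controllers-7c44cf5b79-2q9kp", "eth0"))
    // localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0
}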
Jul 6 23:50:14.861986 containerd[1469]: 2025-07-06 23:50:14.809 [INFO][4173] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" HandleID="k8s-pod-network.3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.862487 containerd[1469]: 2025-07-06 23:50:14.818 [INFO][4127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0", GenerateName:"calico-kube-controllers-7c44cf5b79-", Namespace:"calico-system", SelfLink:"", UID:"cd9fb964-0fb8-4877-9487-51dc490180f3", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c44cf5b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c44cf5b79-2q9kp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali634b6b6742e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.862487 containerd[1469]: 2025-07-06 23:50:14.818 [INFO][4127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.862487 containerd[1469]: 2025-07-06 23:50:14.818 [INFO][4127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali634b6b6742e ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.862487 containerd[1469]: 2025-07-06 23:50:14.827 [INFO][4127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.862487 containerd[1469]: 2025-07-06 23:50:14.828 [INFO][4127] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0", GenerateName:"calico-kube-controllers-7c44cf5b79-", Namespace:"calico-system", SelfLink:"", UID:"cd9fb964-0fb8-4877-9487-51dc490180f3", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c44cf5b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b", Pod:"calico-kube-controllers-7c44cf5b79-2q9kp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali634b6b6742e", MAC:"c6:51:d6:fb:d5:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:14.862487 containerd[1469]: 2025-07-06 23:50:14.851 [INFO][4127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b" Namespace="calico-system" Pod="calico-kube-controllers-7c44cf5b79-2q9kp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:14.863857 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:50:14.874955 containerd[1469]: time="2025-07-06T23:50:14.874903969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-bcvjq,Uid:b6baffdc-d693-4f2e-98c3-45c2d2376ca7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a\"" Jul 6 23:50:14.891028 systemd[1]: run-netns-cni\x2d583f9fb5\x2de276\x2d35aa\x2d59b2\x2d41e9efbbf854.mount: Deactivated successfully. Jul 6 23:50:14.898449 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:14.907570 containerd[1469]: time="2025-07-06T23:50:14.901531160Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:14.907570 containerd[1469]: time="2025-07-06T23:50:14.904731646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:14.907570 containerd[1469]: time="2025-07-06T23:50:14.904792660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.907570 containerd[1469]: time="2025-07-06T23:50:14.904974551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:14.918638 kernel: bpftool[4505]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 6 23:50:14.933168 systemd[1]: Started cri-containerd-3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b.scope - libcontainer container 3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b. Jul 6 23:50:14.940519 containerd[1469]: time="2025-07-06T23:50:14.940468162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66947d49bf-bxk5j,Uid:2b3a3ca4-77bc-49c7-8b23-f798452500a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050\"" Jul 6 23:50:14.960003 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:14.991544 containerd[1469]: time="2025-07-06T23:50:14.991195610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c44cf5b79-2q9kp,Uid:cd9fb964-0fb8-4877-9487-51dc490180f3,Namespace:calico-system,Attempt:1,} returns sandbox id \"3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b\"" Jul 6 23:50:15.034493 sshd[4386]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:15.038355 systemd[1]: sshd@8-10.0.0.53:22-10.0.0.1:54028.service: Deactivated successfully. Jul 6 23:50:15.040910 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:50:15.043194 systemd-logind[1450]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:50:15.044210 systemd-logind[1450]: Removed session 9. Jul 6 23:50:15.092962 kubelet[2511]: I0706 23:50:15.092923 2511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3cd80a1-b09b-46fa-82e4-f75c70ebcec0" path="/var/lib/kubelet/pods/c3cd80a1-b09b-46fa-82e4-f75c70ebcec0/volumes" Jul 6 23:50:15.192230 systemd-networkd[1393]: vxlan.calico: Link UP Jul 6 23:50:15.192252 systemd-networkd[1393]: vxlan.calico: Gained carrier Jul 6 23:50:15.946715 systemd-networkd[1393]: cali634b6b6742e: Gained IPv6LL Jul 6 23:50:16.090367 containerd[1469]: time="2025-07-06T23:50:16.090319479Z" level=info msg="StopPodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\"" Jul 6 23:50:16.139819 systemd-networkd[1393]: calib9a9a806e14: Gained IPv6LL Jul 6 23:50:16.330824 systemd-networkd[1393]: cali119b2ad589c: Gained IPv6LL Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.333 [INFO][4625] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.333 [INFO][4625] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" iface="eth0" netns="/var/run/netns/cni-0434b1f9-efe8-5c7f-f81c-29777cf5c9b5" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.334 [INFO][4625] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" iface="eth0" netns="/var/run/netns/cni-0434b1f9-efe8-5c7f-f81c-29777cf5c9b5" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.334 [INFO][4625] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" iface="eth0" netns="/var/run/netns/cni-0434b1f9-efe8-5c7f-f81c-29777cf5c9b5" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.334 [INFO][4625] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.334 [INFO][4625] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.361 [INFO][4634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.361 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.361 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.367 [WARNING][4634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.369 [INFO][4634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.371 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:16.378890 containerd[1469]: 2025-07-06 23:50:16.375 [INFO][4625] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:16.380718 containerd[1469]: time="2025-07-06T23:50:16.380671585Z" level=info msg="TearDown network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" successfully" Jul 6 23:50:16.380718 containerd[1469]: time="2025-07-06T23:50:16.380711720Z" level=info msg="StopPodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" returns successfully" Jul 6 23:50:16.382832 containerd[1469]: time="2025-07-06T23:50:16.382777837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9k6p,Uid:3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51,Namespace:calico-system,Attempt:1,}" Jul 6 23:50:16.383726 systemd[1]: run-netns-cni\x2d0434b1f9\x2defe8\x2d5c7f\x2df81c\x2d29777cf5c9b5.mount: Deactivated successfully. 
Jul 6 23:50:16.512024 systemd-networkd[1393]: calid5db1cfbba5: Link UP Jul 6 23:50:16.512910 systemd-networkd[1393]: calid5db1cfbba5: Gained carrier Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.430 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0 goldmane-58fd7646b9- calico-system 3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51 1072 0 2025-07-06 23:49:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-d9k6p eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid5db1cfbba5 [] [] }} ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.430 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.466 [INFO][4657] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" HandleID="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.466 [INFO][4657] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" HandleID="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000495c20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-d9k6p", "timestamp":"2025-07-06 23:50:16.466518072 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.467 [INFO][4657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.467 [INFO][4657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.467 [INFO][4657] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.473 [INFO][4657] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.477 [INFO][4657] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.481 [INFO][4657] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.483 [INFO][4657] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.488 [INFO][4657] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.488 [INFO][4657] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.491 [INFO][4657] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.496 [INFO][4657] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.501 [INFO][4657] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.501 [INFO][4657] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" host="localhost" Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.501 [INFO][4657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
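
[editor's note] All five pods so far (.129 through .133) come out of the single block 192.168.88.128/26 that this host holds an affinity for. A /26 leaves 6 host bits, i.e. 64 addresses spanning .128 through .191; checked mechanically below.

package main

import (
    "fmt"
    "net/netip"
)

// Quick arithmetic behind "Trying affinity for 192.168.88.128/26".
func main() {
    p := netip.MustParsePrefix("192.168.88.128/26")
    size := 1 << (32 - p.Bits()) // 2^6 = 64 addresses
    first := p.Addr()
    last := first
    for i := 1; i < size; i++ {
        last = last.Next()
    }
    fmt.Printf("block %s: %d addresses, %s - %s\n", p, size, first, last)
    for _, s := range []string{"192.168.88.130", "192.168.88.133"} {
        fmt.Println(s, "in block:", p.Contains(netip.MustParseAddr(s)))
    }
}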
Jul 6 23:50:16.533141 containerd[1469]: 2025-07-06 23:50:16.501 [INFO][4657] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" HandleID="k8s-pod-network.64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.533783 containerd[1469]: 2025-07-06 23:50:16.505 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-d9k6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid5db1cfbba5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:16.533783 containerd[1469]: 2025-07-06 23:50:16.505 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.533783 containerd[1469]: 2025-07-06 23:50:16.505 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5db1cfbba5 ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.533783 containerd[1469]: 2025-07-06 23:50:16.513 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.533783 containerd[1469]: 2025-07-06 23:50:16.514 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d", Pod:"goldmane-58fd7646b9-d9k6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid5db1cfbba5", MAC:"86:9c:fe:23:f5:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:16.533783 containerd[1469]: 2025-07-06 23:50:16.528 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d" Namespace="calico-system" Pod="goldmane-58fd7646b9-d9k6p" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:16.557154 containerd[1469]: time="2025-07-06T23:50:16.557035045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:16.557154 containerd[1469]: time="2025-07-06T23:50:16.557113281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:16.557154 containerd[1469]: time="2025-07-06T23:50:16.557130013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:16.557356 containerd[1469]: time="2025-07-06T23:50:16.557226705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:16.587703 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL Jul 6 23:50:16.594894 systemd[1]: Started cri-containerd-64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d.scope - libcontainer container 64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d. 
Jul 6 23:50:16.608276 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:16.632243 containerd[1469]: time="2025-07-06T23:50:16.632088310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d9k6p,Uid:3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51,Namespace:calico-system,Attempt:1,} returns sandbox id \"64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d\"" Jul 6 23:50:16.714748 systemd-networkd[1393]: calif0c0403a82a: Gained IPv6LL Jul 6 23:50:16.969898 containerd[1469]: time="2025-07-06T23:50:16.969770198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:16.970586 containerd[1469]: time="2025-07-06T23:50:16.970482464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 6 23:50:16.971644 containerd[1469]: time="2025-07-06T23:50:16.971620610Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:16.974016 containerd[1469]: time="2025-07-06T23:50:16.973956122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:16.974678 containerd[1469]: time="2025-07-06T23:50:16.974651287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 2.172626929s" Jul 6 23:50:16.974732 containerd[1469]: time="2025-07-06T23:50:16.974681343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 6 23:50:16.976725 containerd[1469]: time="2025-07-06T23:50:16.976689141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 6 23:50:16.987096 containerd[1469]: time="2025-07-06T23:50:16.987067606Z" level=info msg="CreateContainer within sandbox \"6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 6 23:50:17.004391 containerd[1469]: time="2025-07-06T23:50:17.004356628Z" level=info msg="CreateContainer within sandbox \"6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"11389bf45bf55d7cb5c46845cb43d5253be510e4df99561261abf5bd3c86fc32\"" Jul 6 23:50:17.004810 containerd[1469]: time="2025-07-06T23:50:17.004776706Z" level=info msg="StartContainer for \"11389bf45bf55d7cb5c46845cb43d5253be510e4df99561261abf5bd3c86fc32\"" Jul 6 23:50:17.032663 systemd[1]: Started cri-containerd-11389bf45bf55d7cb5c46845cb43d5253be510e4df99561261abf5bd3c86fc32.scope - libcontainer container 11389bf45bf55d7cb5c46845cb43d5253be510e4df99561261abf5bd3c86fc32. 
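
[editor's note] As a back-of-envelope check on pull entries like the whisker one above: containerd reports "bytes read=4661207", an image size of 6153902 bytes, and a total time of 2.172626929s, which works out to roughly 2 MiB/s of registry traffic. Values below are copied from the log; the arithmetic is purely illustrative.

package main

import (
    "fmt"
    "time"
)

func main() {
    d, err := time.ParseDuration("2.172626929s") // duration as logged
    if err != nil {
        panic(err)
    }
    const bytesRead = 4661207 // "bytes read" as logged
    mbps := float64(bytesRead) / d.Seconds() / (1 << 20)
    fmt.Printf("pulled %d bytes in %s (%.2f MiB/s)\n", bytesRead, d, mbps)
}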
Jul 6 23:50:17.077970 containerd[1469]: time="2025-07-06T23:50:17.077914380Z" level=info msg="StartContainer for \"11389bf45bf55d7cb5c46845cb43d5253be510e4df99561261abf5bd3c86fc32\" returns successfully"
Jul 6 23:50:17.995793 systemd-networkd[1393]: calid5db1cfbba5: Gained IPv6LL
Jul 6 23:50:19.687999 systemd[1]: run-containerd-runc-k8s.io-8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c-runc.V4YMgY.mount: Deactivated successfully.
Jul 6 23:50:20.049787 systemd[1]: Started sshd@9-10.0.0.53:22-10.0.0.1:57432.service - OpenSSH per-connection server daemon (10.0.0.1:57432).
Jul 6 23:50:20.102971 sshd[4811]: Accepted publickey for core from 10.0.0.1 port 57432 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:50:20.105822 sshd[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:20.113478 systemd-logind[1450]: New session 10 of user core.
Jul 6 23:50:20.117673 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:50:20.256459 sshd[4811]: pam_unix(sshd:session): session closed for user core
Jul 6 23:50:20.261759 systemd[1]: sshd@9-10.0.0.53:22-10.0.0.1:57432.service: Deactivated successfully.
Jul 6 23:50:20.264010 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:50:20.264830 systemd-logind[1450]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:50:20.265883 systemd-logind[1450]: Removed session 10.
Jul 6 23:50:22.675077 containerd[1469]: time="2025-07-06T23:50:22.675005034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:50:22.727151 containerd[1469]: time="2025-07-06T23:50:22.727070999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 6 23:50:22.785594 containerd[1469]: time="2025-07-06T23:50:22.785525339Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:50:22.878696 containerd[1469]: time="2025-07-06T23:50:22.878632017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:50:22.879971 containerd[1469]: time="2025-07-06T23:50:22.879921596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.903190155s"
Jul 6 23:50:22.879971 containerd[1469]: time="2025-07-06T23:50:22.879968083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 6 23:50:22.881846 containerd[1469]: time="2025-07-06T23:50:22.881811832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 6 23:50:22.883288 containerd[1469]: time="2025-07-06T23:50:22.883230885Z" level=info msg="CreateContainer within sandbox \"967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:50:23.090923 containerd[1469]: time="2025-07-06T23:50:23.090860341Z" level=info msg="StopPodSandbox for \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\""
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.498 [INFO][4853] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.498 [INFO][4853] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" iface="eth0" netns="/var/run/netns/cni-68174d77-8bc7-9be4-e614-932ee1ce73c6"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.498 [INFO][4853] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" iface="eth0" netns="/var/run/netns/cni-68174d77-8bc7-9be4-e614-932ee1ce73c6"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.498 [INFO][4853] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" iface="eth0" netns="/var/run/netns/cni-68174d77-8bc7-9be4-e614-932ee1ce73c6"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.498 [INFO][4853] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.499 [INFO][4853] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.518 [INFO][4862] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" HandleID="k8s-pod-network.2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" Workload="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.518 [INFO][4862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.518 [INFO][4862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.527 [WARNING][4862] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" HandleID="k8s-pod-network.2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" Workload="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.527 [INFO][4862] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" HandleID="k8s-pod-network.2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503" Workload="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.529 [INFO][4862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:23.540309 containerd[1469]: 2025-07-06 23:50:23.536 [INFO][4853] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503"
Jul 6 23:50:23.541501 containerd[1469]: time="2025-07-06T23:50:23.540894084Z" level=info msg="TearDown network for sandbox \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\" successfully"
Jul 6 23:50:23.541501 containerd[1469]: time="2025-07-06T23:50:23.541013157Z" level=info msg="StopPodSandbox for \"2c338aa0ea754baf79a4ae49317d3d5c807bae25ddcddb07bfa18c3f5e030503\" returns successfully"
Jul 6 23:50:23.542966 containerd[1469]: time="2025-07-06T23:50:23.542581119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkdw8,Uid:e9ac6c9c-1856-41b6-91f1-74ff39eba111,Namespace:calico-system,Attempt:1,}"
Jul 6 23:50:23.543074 systemd[1]: run-netns-cni\x2d68174d77\x2d8bc7\x2d9be4\x2de614\x2d932ee1ce73c6.mount: Deactivated successfully.
Jul 6 23:50:24.090344 containerd[1469]: time="2025-07-06T23:50:24.090268668Z" level=info msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\""
Jul 6 23:50:24.090801 containerd[1469]: time="2025-07-06T23:50:24.090367414Z" level=info msg="StopPodSandbox for \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\""
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.711 [INFO][4892] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.712 [INFO][4892] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" iface="eth0" netns="/var/run/netns/cni-19004b8f-77c1-c917-136e-316a8949c71b"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.712 [INFO][4892] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" iface="eth0" netns="/var/run/netns/cni-19004b8f-77c1-c917-136e-316a8949c71b"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.713 [INFO][4892] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" iface="eth0" netns="/var/run/netns/cni-19004b8f-77c1-c917-136e-316a8949c71b"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.713 [INFO][4892] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.713 [INFO][4892] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.740 [INFO][4909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" HandleID="k8s-pod-network.d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" Workload="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.741 [INFO][4909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.741 [INFO][4909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.745 [WARNING][4909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" HandleID="k8s-pod-network.d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" Workload="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.746 [INFO][4909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" HandleID="k8s-pod-network.d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616" Workload="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.747 [INFO][4909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:24.756056 containerd[1469]: 2025-07-06 23:50:24.750 [INFO][4892] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616"
Jul 6 23:50:24.756056 containerd[1469]: time="2025-07-06T23:50:24.753305463Z" level=info msg="TearDown network for sandbox \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\" successfully"
Jul 6 23:50:24.756056 containerd[1469]: time="2025-07-06T23:50:24.753336982Z" level=info msg="StopPodSandbox for \"d6ac02739ca6a4d3d51ea623dc0eac9b8d95af5fae452cf57718a0396f427616\" returns successfully"
Jul 6 23:50:24.758028 kubelet[2511]: E0706 23:50:24.754948 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:50:24.758060 systemd[1]: run-netns-cni\x2d19004b8f\x2d77c1\x2dc917\x2d136e\x2d316a8949c71b.mount: Deactivated successfully.
Jul 6 23:50:24.759561 containerd[1469]: time="2025-07-06T23:50:24.759494807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8m7k9,Uid:8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd,Namespace:kube-system,Attempt:1,}"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.713 [INFO][4893] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.713 [INFO][4893] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" iface="eth0" netns="/var/run/netns/cni-2c10c18b-0f5e-914a-bb3d-d94d58949593"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.714 [INFO][4893] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" iface="eth0" netns="/var/run/netns/cni-2c10c18b-0f5e-914a-bb3d-d94d58949593"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.714 [INFO][4893] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" iface="eth0" netns="/var/run/netns/cni-2c10c18b-0f5e-914a-bb3d-d94d58949593"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.714 [INFO][4893] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.714 [INFO][4893] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.746 [INFO][4916] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" HandleID="k8s-pod-network.2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.747 [INFO][4916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.747 [INFO][4916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.757 [WARNING][4916] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" HandleID="k8s-pod-network.2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.757 [INFO][4916] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" HandleID="k8s-pod-network.2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0"
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.766 [INFO][4916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:24.773567 containerd[1469]: 2025-07-06 23:50:24.770 [INFO][4893] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f"
Jul 6 23:50:24.776355 systemd[1]: run-netns-cni\x2d2c10c18b\x2d0f5e\x2d914a\x2dbb3d\x2dd94d58949593.mount: Deactivated successfully.
Jul 6 23:50:24.776629 containerd[1469]: time="2025-07-06T23:50:24.776463298Z" level=info msg="TearDown network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\" successfully"
Jul 6 23:50:24.776629 containerd[1469]: time="2025-07-06T23:50:24.776510627Z" level=info msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\" returns successfully"
Jul 6 23:50:24.777479 containerd[1469]: time="2025-07-06T23:50:24.777442833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-jwkmx,Uid:9626945a-0af4-4eaa-ac43-a94044095a5d,Namespace:calico-apiserver,Attempt:1,}"
Jul 6 23:50:24.879462 containerd[1469]: time="2025-07-06T23:50:24.879397992Z" level=info msg="CreateContainer within sandbox \"967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ae9db3c44ef66b8d2432d873a81d34685be5d3edee37c2ce5b6cb5dbc77cf298\""
Jul 6 23:50:24.880077 containerd[1469]: time="2025-07-06T23:50:24.880022188Z" level=info msg="StartContainer for \"ae9db3c44ef66b8d2432d873a81d34685be5d3edee37c2ce5b6cb5dbc77cf298\""
Jul 6 23:50:24.921716 systemd[1]: Started cri-containerd-ae9db3c44ef66b8d2432d873a81d34685be5d3edee37c2ce5b6cb5dbc77cf298.scope - libcontainer container ae9db3c44ef66b8d2432d873a81d34685be5d3edee37c2ce5b6cb5dbc77cf298.
Jul 6 23:50:25.269250 systemd[1]: Started sshd@10-10.0.0.53:22-10.0.0.1:57440.service - OpenSSH per-connection server daemon (10.0.0.1:57440).
Jul 6 23:50:25.305986 sshd[4968]: Accepted publickey for core from 10.0.0.1 port 57440 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:50:25.307958 sshd[4968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:25.312318 systemd-logind[1450]: New session 11 of user core.
Jul 6 23:50:25.318695 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:50:25.596675 containerd[1469]: time="2025-07-06T23:50:25.596528044Z" level=info msg="StartContainer for \"ae9db3c44ef66b8d2432d873a81d34685be5d3edee37c2ce5b6cb5dbc77cf298\" returns successfully"
Jul 6 23:50:25.679213 sshd[4968]: pam_unix(sshd:session): session closed for user core
Jul 6 23:50:25.683754 systemd[1]: sshd@10-10.0.0.53:22-10.0.0.1:57440.service: Deactivated successfully.
Jul 6 23:50:25.686349 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:50:25.687104 systemd-logind[1450]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:50:25.688198 systemd-logind[1450]: Removed session 11.
Jul 6 23:50:27.062996 kubelet[2511]: I0706 23:50:27.062900 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-865bb6f9f-bcvjq" podStartSLOduration=36.058629275 podStartE2EDuration="44.062874962s" podCreationTimestamp="2025-07-06 23:49:43 +0000 UTC" firstStartedPulling="2025-07-06 23:50:14.876664944 +0000 UTC m=+47.863991223" lastFinishedPulling="2025-07-06 23:50:22.880910631 +0000 UTC m=+55.868236910" observedRunningTime="2025-07-06 23:50:27.062678742 +0000 UTC m=+60.050005042" watchObservedRunningTime="2025-07-06 23:50:27.062874962 +0000 UTC m=+60.050201241"
Jul 6 23:50:27.090025 containerd[1469]: time="2025-07-06T23:50:27.089972468Z" level=info msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\""
Jul 6 23:50:27.090549 containerd[1469]: time="2025-07-06T23:50:27.090492695Z" level=info msg="StopPodSandbox for \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\""
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.309 [WARNING][5014] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" WorkloadEndpoint="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.310 [INFO][5014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.310 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" iface="eth0" netns=""
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.310 [INFO][5014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.310 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.354 [INFO][5033] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.354 [INFO][5033] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.354 [INFO][5033] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.540 [WARNING][5033] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.541 [INFO][5033] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.542 [INFO][5033] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:27.550573 containerd[1469]: 2025-07-06 23:50:27.547 [INFO][5014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:27.551081 containerd[1469]: time="2025-07-06T23:50:27.550618135Z" level=info msg="TearDown network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" successfully"
Jul 6 23:50:27.551081 containerd[1469]: time="2025-07-06T23:50:27.550646369Z" level=info msg="StopPodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" returns successfully"
Jul 6 23:50:27.551583 containerd[1469]: time="2025-07-06T23:50:27.551523336Z" level=info msg="RemovePodSandbox for \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\""
Jul 6 23:50:27.554211 containerd[1469]: time="2025-07-06T23:50:27.554163296Z" level=info msg="Forcibly stopping sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\""
Jul 6 23:50:27.606909 kubelet[2511]: I0706 23:50:27.606872 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.321 [INFO][5015] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.322 [INFO][5015] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" iface="eth0" netns="/var/run/netns/cni-9f869086-37e0-4dd9-0456-6c8a5eec8b05"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.323 [INFO][5015] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" iface="eth0" netns="/var/run/netns/cni-9f869086-37e0-4dd9-0456-6c8a5eec8b05"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.327 [INFO][5015] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" iface="eth0" netns="/var/run/netns/cni-9f869086-37e0-4dd9-0456-6c8a5eec8b05"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.327 [INFO][5015] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.327 [INFO][5015] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.359 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" HandleID="k8s-pod-network.1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" Workload="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.359 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.542 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.589 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" HandleID="k8s-pod-network.1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" Workload="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.589 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" HandleID="k8s-pod-network.1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77" Workload="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0"
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.752 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:27.758475 containerd[1469]: 2025-07-06 23:50:27.756 [INFO][5015] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77"
Jul 6 23:50:27.759127 containerd[1469]: time="2025-07-06T23:50:27.758968356Z" level=info msg="TearDown network for sandbox \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\" successfully"
Jul 6 23:50:27.759127 containerd[1469]: time="2025-07-06T23:50:27.759003945Z" level=info msg="StopPodSandbox for \"1bc4e4f8f15f30c02ff676aeb1198115fa51bf388a23b228c46a16e30bb95f77\" returns successfully"
Jul 6 23:50:27.759486 kubelet[2511]: E0706 23:50:27.759459 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:50:27.760904 containerd[1469]: time="2025-07-06T23:50:27.759979363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhnqf,Uid:bf7af7df-e789-4a3b-b647-3ff2fb52d715,Namespace:kube-system,Attempt:1,}"
Jul 6 23:50:28.054625 containerd[1469]: time="2025-07-06T23:50:28.054569037Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:50:28.121851 systemd[1]: run-netns-cni\x2d9f869086\x2d37e0\x2d4dd9\x2d0456\x2d6c8a5eec8b05.mount: Deactivated successfully.
Jul 6 23:50:28.335353 containerd[1469]: time="2025-07-06T23:50:28.335132896Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 6 23:50:28.337832 containerd[1469]: time="2025-07-06T23:50:28.337784594Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.45594s"
Jul 6 23:50:28.337886 containerd[1469]: time="2025-07-06T23:50:28.337833118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 6 23:50:28.339491 containerd[1469]: time="2025-07-06T23:50:28.339467809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 6 23:50:28.340648 containerd[1469]: time="2025-07-06T23:50:28.340607192Z" level=info msg="CreateContainer within sandbox \"de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 6 23:50:28.568263 systemd-networkd[1393]: cali586fec7735f: Link UP
Jul 6 23:50:28.568492 systemd-networkd[1393]: cali586fec7735f: Gained carrier
Jul 6 23:50:28.608932 kubelet[2511]: I0706 23:50:28.608777 2511 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.546 [INFO][5039] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--dkdw8-eth0 csi-node-driver- calico-system e9ac6c9c-1856-41b6-91f1-74ff39eba111 1111 0 2025-07-06 23:49:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-dkdw8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali586fec7735f [] [] }} ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.546 [INFO][5039] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.777 [INFO][5099] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" HandleID="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Workload="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.778 [INFO][5099] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" HandleID="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Workload="localhost-k8s-csi--node--driver--dkdw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003280e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-dkdw8", "timestamp":"2025-07-06 23:50:27.777562264 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.778 [INFO][5099] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.778 [INFO][5099] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.778 [INFO][5099] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:27.817 [INFO][5099] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.049 [INFO][5099] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.421 [INFO][5099] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.424 [INFO][5099] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.446 [INFO][5099] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.446 [INFO][5099] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.463 [INFO][5099] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.500 [INFO][5099] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.561 [INFO][5099] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.561 [INFO][5099] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" host="localhost"
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.561 [INFO][5099] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:28.730895 containerd[1469]: 2025-07-06 23:50:28.561 [INFO][5099] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" HandleID="k8s-pod-network.2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Workload="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.731903 containerd[1469]: 2025-07-06 23:50:28.564 [INFO][5039] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dkdw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9ac6c9c-1856-41b6-91f1-74ff39eba111", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-dkdw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali586fec7735f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:50:28.731903 containerd[1469]: 2025-07-06 23:50:28.565 [INFO][5039] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.731903 containerd[1469]: 2025-07-06 23:50:28.565 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali586fec7735f ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.731903 containerd[1469]: 2025-07-06 23:50:28.569 [INFO][5039] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.731903 containerd[1469]: 2025-07-06 23:50:28.570 [INFO][5039] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--dkdw8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e9ac6c9c-1856-41b6-91f1-74ff39eba111", ResourceVersion:"1111", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3", Pod:"csi-node-driver-dkdw8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali586fec7735f", MAC:"ba:59:42:f3:1b:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:50:28.731903 containerd[1469]: 2025-07-06 23:50:28.723 [INFO][5039] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3" Namespace="calico-system" Pod="csi-node-driver-dkdw8" WorkloadEndpoint="localhost-k8s-csi--node--driver--dkdw8-eth0"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.816 [WARNING][5075] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" WorkloadEndpoint="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.816 [INFO][5075] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.816 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" iface="eth0" netns=""
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.816 [INFO][5075] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.816 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.845 [INFO][5108] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:27.845 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:28.561 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:28.580 [WARNING][5108] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:28.581 [INFO][5108] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" HandleID="k8s-pod-network.758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93" Workload="localhost-k8s-whisker--556fc4889b--95bjd-eth0"
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:28.724 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:28.736367 containerd[1469]: 2025-07-06 23:50:28.728 [INFO][5075] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93"
Jul 6 23:50:28.736367 containerd[1469]: time="2025-07-06T23:50:28.734680765Z" level=info msg="TearDown network for sandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" successfully"
Jul 6 23:50:28.984168 containerd[1469]: time="2025-07-06T23:50:28.983329178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:50:28.984321 containerd[1469]: time="2025-07-06T23:50:28.984065190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:50:28.984321 containerd[1469]: time="2025-07-06T23:50:28.984080400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:50:28.984321 containerd[1469]: time="2025-07-06T23:50:28.984271279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:50:29.011150 systemd[1]: Started cri-containerd-2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3.scope - libcontainer container 2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3.
Jul 6 23:50:29.024611 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:50:29.037718 containerd[1469]: time="2025-07-06T23:50:29.037663248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dkdw8,Uid:e9ac6c9c-1856-41b6-91f1-74ff39eba111,Namespace:calico-system,Attempt:1,} returns sandbox id \"2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3\""
Jul 6 23:50:29.506477 systemd-networkd[1393]: calic6a98d344c6: Link UP
Jul 6 23:50:29.506808 systemd-networkd[1393]: calic6a98d344c6: Gained carrier
Jul 6 23:50:29.642779 systemd-networkd[1393]: cali586fec7735f: Gained IPv6LL
Jul 6 23:50:29.690310 containerd[1469]: time="2025-07-06T23:50:29.690237040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jul 6 23:50:29.691287 containerd[1469]: time="2025-07-06T23:50:29.691107141Z" level=info msg="RemovePodSandbox \"758f597e0f0bd20ea9ecf3f5c6d1c44b2111fad6f713d0c9f6d4b1d585b45a93\" returns successfully"
Jul 6 23:50:29.692177 containerd[1469]: time="2025-07-06T23:50:29.692040054Z" level=info msg="StopPodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\""
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.070 [INFO][5083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0 coredns-7c65d6cfc9- kube-system 8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd 1117 0 2025-07-06 23:49:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-8m7k9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic6a98d344c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.070 [INFO][5083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.451 [INFO][5136] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" HandleID="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Workload="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.451 [INFO][5136] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" HandleID="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Workload="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-8m7k9", "timestamp":"2025-07-06 23:50:28.451448084 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.451 [INFO][5136] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.724 [INFO][5136] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.724 [INFO][5136] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.742 [INFO][5136] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.776 [INFO][5136] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.945 [INFO][5136] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.948 [INFO][5136] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.950 [INFO][5136] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.950 [INFO][5136] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:28.991 [INFO][5136] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:29.264 [INFO][5136] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:29.500 [INFO][5136] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:29.500 [INFO][5136] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" host="localhost"
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:29.500 [INFO][5136] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 6 23:50:29.692972 containerd[1469]: 2025-07-06 23:50:29.500 [INFO][5136] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" HandleID="k8s-pod-network.d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Workload="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.693597 containerd[1469]: 2025-07-06 23:50:29.503 [INFO][5083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-8m7k9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a98d344c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:50:29.693597 containerd[1469]: 2025-07-06 23:50:29.503 [INFO][5083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.693597 containerd[1469]: 2025-07-06 23:50:29.504 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6a98d344c6 ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.693597 containerd[1469]: 2025-07-06 23:50:29.507 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.693597 containerd[1469]: 2025-07-06 23:50:29.508 [INFO][5083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3", Pod:"coredns-7c65d6cfc9-8m7k9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic6a98d344c6", MAC:"ea:0a:cb:0f:77:ce", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 6 23:50:29.693597 containerd[1469]: 2025-07-06 23:50:29.685 [INFO][5083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-8m7k9" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--8m7k9-eth0"
Jul 6 23:50:29.708210 containerd[1469]: time="2025-07-06T23:50:29.707590459Z" level=info msg="CreateContainer within sandbox \"de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"754b085fde03305c74140a9ace9130e00e9fd88f74859c1bd3e5cd08536feb91\""
Jul 6 23:50:29.709748 containerd[1469]: time="2025-07-06T23:50:29.708588397Z" level=info msg="StartContainer for \"754b085fde03305c74140a9ace9130e00e9fd88f74859c1bd3e5cd08536feb91\""
Jul 6 23:50:29.751109 containerd[1469]: time="2025-07-06T23:50:29.750999253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:50:29.751640 containerd[1469]: time="2025-07-06T23:50:29.751127140Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:50:29.755619 containerd[1469]: time="2025-07-06T23:50:29.754799636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:50:29.755619 containerd[1469]: time="2025-07-06T23:50:29.754965677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:50:29.756733 systemd[1]: Started cri-containerd-754b085fde03305c74140a9ace9130e00e9fd88f74859c1bd3e5cd08536feb91.scope - libcontainer container 754b085fde03305c74140a9ace9130e00e9fd88f74859c1bd3e5cd08536feb91.
Jul 6 23:50:29.773842 systemd-networkd[1393]: cali7a907f6155a: Link UP
Jul 6 23:50:29.775945 systemd-networkd[1393]: cali7a907f6155a: Gained carrier
Jul 6 23:50:29.789699 systemd[1]: Started cri-containerd-d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3.scope - libcontainer container d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3.
Jul 6 23:50:29.802628 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:50:29.867672 containerd[1469]: time="2025-07-06T23:50:29.867603792Z" level=info msg="StartContainer for \"754b085fde03305c74140a9ace9130e00e9fd88f74859c1bd3e5cd08536feb91\" returns successfully"
Jul 6 23:50:29.867819 containerd[1469]: time="2025-07-06T23:50:29.867716220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8m7k9,Uid:8f2c4407-fa06-42c0-b2df-cdbd60e8d1cd,Namespace:kube-system,Attempt:1,} returns sandbox id \"d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3\""
Jul 6 23:50:29.868918 kubelet[2511]: E0706 23:50:29.868855 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:28.051 [INFO][5109] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0 calico-apiserver-865bb6f9f- calico-apiserver 9626945a-0af4-4eaa-ac43-a94044095a5d 1118 0 2025-07-06 23:49:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:865bb6f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-865bb6f9f-jwkmx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7a907f6155a [] [] }} ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:28.052 [INFO][5109] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:28.453 [INFO][5135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" HandleID="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:28.454 [INFO][5135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" HandleID="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-865bb6f9f-jwkmx", "timestamp":"2025-07-06 23:50:28.453501766 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:28.454 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.500 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.500 [INFO][5135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.687 [INFO][5135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.700 [INFO][5135] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.712 [INFO][5135] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.717 [INFO][5135] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.720 [INFO][5135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.720 [INFO][5135] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.722 [INFO][5135] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.732 [INFO][5135] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.758 [INFO][5135] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" host="localhost"
Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.758 [INFO][5135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23"
host="localhost" Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.761 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:29.889551 containerd[1469]: 2025-07-06 23:50:29.761 [INFO][5135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" HandleID="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:29.890135 containerd[1469]: 2025-07-06 23:50:29.766 [INFO][5109] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9626945a-0af4-4eaa-ac43-a94044095a5d", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-865bb6f9f-jwkmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a907f6155a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:29.890135 containerd[1469]: 2025-07-06 23:50:29.767 [INFO][5109] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:29.890135 containerd[1469]: 2025-07-06 23:50:29.767 [INFO][5109] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a907f6155a ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:29.890135 containerd[1469]: 2025-07-06 23:50:29.777 [INFO][5109] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:29.890135 containerd[1469]: 2025-07-06 23:50:29.778 
[INFO][5109] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9626945a-0af4-4eaa-ac43-a94044095a5d", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23", Pod:"calico-apiserver-865bb6f9f-jwkmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a907f6155a", MAC:"4a:ca:11:1a:5d:1c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:29.890135 containerd[1469]: 2025-07-06 23:50:29.874 [INFO][5109] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Namespace="calico-apiserver" Pod="calico-apiserver-865bb6f9f-jwkmx" WorkloadEndpoint="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:29.890737 containerd[1469]: time="2025-07-06T23:50:29.889330152Z" level=info msg="CreateContainer within sandbox \"d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:50:29.921205 containerd[1469]: time="2025-07-06T23:50:29.920869230Z" level=info msg="CreateContainer within sandbox \"d9b19d0f4ce39ba5e0771cf9d4582075686a811323436b1975d1467245d8f6c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f5997dc7c9b60615a132c705aa2ce8aa6389f0d9fcc0160ad9f2d8541cbadda0\"" Jul 6 23:50:29.923216 containerd[1469]: time="2025-07-06T23:50:29.923163903Z" level=info msg="StartContainer for \"f5997dc7c9b60615a132c705aa2ce8aa6389f0d9fcc0160ad9f2d8541cbadda0\"" Jul 6 23:50:29.934152 containerd[1469]: time="2025-07-06T23:50:29.933862455Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:29.934152 containerd[1469]: time="2025-07-06T23:50:29.933925096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:29.934152 containerd[1469]: time="2025-07-06T23:50:29.933936177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:29.934152 containerd[1469]: time="2025-07-06T23:50:29.934033706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:29.952093 systemd-networkd[1393]: cali6ea5275d57f: Link UP Jul 6 23:50:29.954211 systemd-networkd[1393]: cali6ea5275d57f: Gained carrier Jul 6 23:50:29.968699 systemd[1]: Started cri-containerd-4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23.scope - libcontainer container 4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23. Jul 6 23:50:29.971975 systemd[1]: Started cri-containerd-f5997dc7c9b60615a132c705aa2ce8aa6389f0d9fcc0160ad9f2d8541cbadda0.scope - libcontainer container f5997dc7c9b60615a132c705aa2ce8aa6389f0d9fcc0160ad9f2d8541cbadda0. Jul 6 23:50:29.990242 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:30.015249 containerd[1469]: time="2025-07-06T23:50:30.015058156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-865bb6f9f-jwkmx,Uid:9626945a-0af4-4eaa-ac43-a94044095a5d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23\"" Jul 6 23:50:30.019569 containerd[1469]: time="2025-07-06T23:50:30.019444464Z" level=info msg="CreateContainer within sandbox \"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.871 [WARNING][5244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0", GenerateName:"calico-kube-controllers-7c44cf5b79-", Namespace:"calico-system", SelfLink:"", UID:"cd9fb964-0fb8-4877-9487-51dc490180f3", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c44cf5b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b", Pod:"calico-kube-controllers-7c44cf5b79-2q9kp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali634b6b6742e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.872 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.873 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" iface="eth0" netns="" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.873 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.876 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.917 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.917 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.932 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.953 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:29.954 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:30.248 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:30.253314 containerd[1469]: 2025-07-06 23:50:30.250 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.253314 containerd[1469]: time="2025-07-06T23:50:30.253147202Z" level=info msg="TearDown network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" successfully" Jul 6 23:50:30.253314 containerd[1469]: time="2025-07-06T23:50:30.253184454Z" level=info msg="StopPodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" returns successfully" Jul 6 23:50:30.253775 containerd[1469]: time="2025-07-06T23:50:30.253655704Z" level=info msg="RemovePodSandbox for \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\"" Jul 6 23:50:30.253775 containerd[1469]: time="2025-07-06T23:50:30.253683597Z" level=info msg="Forcibly stopping sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\"" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:28.946 [INFO][5161] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0 coredns-7c65d6cfc9- kube-system bf7af7df-e789-4a3b-b647-3ff2fb52d715 1133 0 2025-07-06 23:49:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-xhnqf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6ea5275d57f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:28.946 [INFO][5161] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.016 [INFO][5202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" HandleID="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Workload="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.016 [INFO][5202] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" HandleID="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Workload="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c72d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-xhnqf", "timestamp":"2025-07-06 23:50:29.016435773 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.016 [INFO][5202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.759 [INFO][5202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.759 [INFO][5202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.877 [INFO][5202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.884 [INFO][5202] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.898 [INFO][5202] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.903 [INFO][5202] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.907 [INFO][5202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.908 [INFO][5202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.910 [INFO][5202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5 Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.917 [INFO][5202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.932 [INFO][5202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.932 [INFO][5202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" host="localhost" Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.932 [INFO][5202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 6 23:50:30.291065 containerd[1469]: 2025-07-06 23:50:29.932 [INFO][5202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" HandleID="k8s-pod-network.21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Workload="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.291590 containerd[1469]: 2025-07-06 23:50:29.937 [INFO][5161] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bf7af7df-e789-4a3b-b647-3ff2fb52d715", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-xhnqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ea5275d57f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:30.291590 containerd[1469]: 2025-07-06 23:50:29.942 [INFO][5161] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.291590 containerd[1469]: 2025-07-06 23:50:29.942 [INFO][5161] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ea5275d57f ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.291590 containerd[1469]: 2025-07-06 23:50:29.955 [INFO][5161] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.291590 
containerd[1469]: 2025-07-06 23:50:29.955 [INFO][5161] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"bf7af7df-e789-4a3b-b647-3ff2fb52d715", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5", Pod:"coredns-7c65d6cfc9-xhnqf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6ea5275d57f", MAC:"02:2d:27:3e:a0:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:30.291590 containerd[1469]: 2025-07-06 23:50:30.285 [INFO][5161] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-xhnqf" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--xhnqf-eth0" Jul 6 23:50:30.334065 containerd[1469]: time="2025-07-06T23:50:30.333939690Z" level=info msg="StartContainer for \"f5997dc7c9b60615a132c705aa2ce8aa6389f0d9fcc0160ad9f2d8541cbadda0\" returns successfully" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.317 [WARNING][5444] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0", GenerateName:"calico-kube-controllers-7c44cf5b79-", Namespace:"calico-system", SelfLink:"", UID:"cd9fb964-0fb8-4877-9487-51dc490180f3", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c44cf5b79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b", Pod:"calico-kube-controllers-7c44cf5b79-2q9kp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali634b6b6742e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.317 [INFO][5444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.317 [INFO][5444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" iface="eth0" netns="" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.317 [INFO][5444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.317 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.341 [INFO][5462] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.342 [INFO][5462] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.342 [INFO][5462] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.416 [WARNING][5462] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.416 [INFO][5462] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" HandleID="k8s-pod-network.18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Workload="localhost-k8s-calico--kube--controllers--7c44cf5b79--2q9kp-eth0" Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.418 [INFO][5462] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:30.428737 containerd[1469]: 2025-07-06 23:50:30.422 [INFO][5444] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a" Jul 6 23:50:30.429515 containerd[1469]: time="2025-07-06T23:50:30.428795720Z" level=info msg="TearDown network for sandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" successfully" Jul 6 23:50:30.492100 containerd[1469]: time="2025-07-06T23:50:30.491961287Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:50:30.492100 containerd[1469]: time="2025-07-06T23:50:30.492052394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:50:30.492100 containerd[1469]: time="2025-07-06T23:50:30.492065338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:30.492399 containerd[1469]: time="2025-07-06T23:50:30.492183627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:50:30.526782 systemd[1]: Started cri-containerd-21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5.scope - libcontainer container 21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5. Jul 6 23:50:30.541279 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:50:30.572713 containerd[1469]: time="2025-07-06T23:50:30.572664733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-xhnqf,Uid:bf7af7df-e789-4a3b-b647-3ff2fb52d715,Namespace:kube-system,Attempt:1,} returns sandbox id \"21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5\"" Jul 6 23:50:30.573629 kubelet[2511]: E0706 23:50:30.573589 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:30.575846 containerd[1469]: time="2025-07-06T23:50:30.575811860Z" level=info msg="CreateContainer within sandbox \"21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:50:30.608509 containerd[1469]: time="2025-07-06T23:50:30.608426335Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 6 23:50:30.608601 containerd[1469]: time="2025-07-06T23:50:30.608509836Z" level=info msg="RemovePodSandbox \"18bee057940035128bb80908d651c1551578d0bdd9d3521a73a0068a90ca6f8a\" returns successfully" Jul 6 23:50:30.608929 containerd[1469]: time="2025-07-06T23:50:30.608890731Z" level=info msg="StopPodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\"" Jul 6 23:50:30.632526 containerd[1469]: time="2025-07-06T23:50:30.632397053Z" level=info msg="CreateContainer within sandbox \"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\"" Jul 6 23:50:30.640701 kubelet[2511]: I0706 23:50:30.640634 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-66947d49bf-bxk5j" podStartSLOduration=32.244283378 podStartE2EDuration="45.640611402s" podCreationTimestamp="2025-07-06 23:49:45 +0000 UTC" firstStartedPulling="2025-07-06 23:50:14.942258762 +0000 UTC m=+47.929585041" lastFinishedPulling="2025-07-06 23:50:28.338586786 +0000 UTC m=+61.325913065" observedRunningTime="2025-07-06 23:50:30.637999709 +0000 UTC m=+63.625325989" watchObservedRunningTime="2025-07-06 23:50:30.640611402 +0000 UTC m=+63.627937681" Jul 6 23:50:30.643622 containerd[1469]: time="2025-07-06T23:50:30.643583169Z" level=info msg="StartContainer for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\"" Jul 6 23:50:30.649619 kubelet[2511]: E0706 23:50:30.646883 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:30.692155 systemd[1]: Started sshd@11-10.0.0.53:22-10.0.0.1:56132.service - OpenSSH per-connection server daemon (10.0.0.1:56132). Jul 6 23:50:30.724674 containerd[1469]: time="2025-07-06T23:50:30.722694972Z" level=info msg="CreateContainer within sandbox \"21a2ca6eb4cfcaa7e6cf0f08fd03910cc5c714543e314b35ddf73de26f4599e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2aff57cf2a9f179ec56625350a497f08d0cec458ff4fb5c50b99d224d1052e1\"" Jul 6 23:50:30.727082 systemd[1]: Started cri-containerd-3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a.scope - libcontainer container 3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a. 
Jul 6 23:50:30.729848 containerd[1469]: time="2025-07-06T23:50:30.727780772Z" level=info msg="StartContainer for \"d2aff57cf2a9f179ec56625350a497f08d0cec458ff4fb5c50b99d224d1052e1\"" Jul 6 23:50:30.752795 kubelet[2511]: I0706 23:50:30.752415 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8m7k9" podStartSLOduration=57.752395012 podStartE2EDuration="57.752395012s" podCreationTimestamp="2025-07-06 23:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:50:30.732611999 +0000 UTC m=+63.719938298" watchObservedRunningTime="2025-07-06 23:50:30.752395012 +0000 UTC m=+63.739721291" Jul 6 23:50:30.779390 sshd[5549]: Accepted publickey for core from 10.0.0.1 port 56132 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:30.781211 sshd[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:30.795525 systemd-networkd[1393]: calic6a98d344c6: Gained IPv6LL Jul 6 23:50:30.797735 systemd[1]: Started cri-containerd-d2aff57cf2a9f179ec56625350a497f08d0cec458ff4fb5c50b99d224d1052e1.scope - libcontainer container d2aff57cf2a9f179ec56625350a497f08d0cec458ff4fb5c50b99d224d1052e1. Jul 6 23:50:30.804386 systemd-logind[1450]: New session 12 of user core. Jul 6 23:50:30.810131 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.724 [WARNING][5523] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0", GenerateName:"calico-apiserver-66947d49bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3a3ca4-77bc-49c7-8b23-f798452500a5", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66947d49bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050", Pod:"calico-apiserver-66947d49bf-bxk5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0c0403a82a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.724 [INFO][5523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.724 
[INFO][5523] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" iface="eth0" netns="" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.724 [INFO][5523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.724 [INFO][5523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.806 [INFO][5551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.806 [INFO][5551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.806 [INFO][5551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.811 [WARNING][5551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.811 [INFO][5551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.814 [INFO][5551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:30.828884 containerd[1469]: 2025-07-06 23:50:30.823 [INFO][5523] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:30.829470 containerd[1469]: time="2025-07-06T23:50:30.828910522Z" level=info msg="TearDown network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" successfully" Jul 6 23:50:30.829470 containerd[1469]: time="2025-07-06T23:50:30.828934279Z" level=info msg="StopPodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" returns successfully" Jul 6 23:50:30.830801 containerd[1469]: time="2025-07-06T23:50:30.829890394Z" level=info msg="RemovePodSandbox for \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\"" Jul 6 23:50:30.830801 containerd[1469]: time="2025-07-06T23:50:30.829958376Z" level=info msg="Forcibly stopping sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\"" Jul 6 23:50:30.868833 containerd[1469]: time="2025-07-06T23:50:30.868736451Z" level=info msg="StartContainer for \"d2aff57cf2a9f179ec56625350a497f08d0cec458ff4fb5c50b99d224d1052e1\" returns successfully" Jul 6 23:50:30.869086 containerd[1469]: time="2025-07-06T23:50:30.868781548Z" level=info msg="StartContainer for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" returns successfully" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.890 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0", GenerateName:"calico-apiserver-66947d49bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"2b3a3ca4-77bc-49c7-8b23-f798452500a5", ResourceVersion:"1175", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66947d49bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de0eaaba0fcc5229b6de454f88b9ebd3edca055a682b8ff2df3f573715d9a050", Pod:"calico-apiserver-66947d49bf-bxk5j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif0c0403a82a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.890 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.890 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" iface="eth0" netns="" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.890 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.890 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.963 [INFO][5646] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.978 [INFO][5646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.979 [INFO][5646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.984 [WARNING][5646] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:30.984 [INFO][5646] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" HandleID="k8s-pod-network.1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Workload="localhost-k8s-calico--apiserver--66947d49bf--bxk5j-eth0" Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:31.012 [INFO][5646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:31.019041 containerd[1469]: 2025-07-06 23:50:31.016 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9" Jul 6 23:50:31.019686 containerd[1469]: time="2025-07-06T23:50:31.019633424Z" level=info msg="TearDown network for sandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" successfully" Jul 6 23:50:31.209558 containerd[1469]: time="2025-07-06T23:50:31.209384240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:50:31.210064 containerd[1469]: time="2025-07-06T23:50:31.209730528Z" level=info msg="RemovePodSandbox \"1268a0265f91a1dfc7114dbe0fbd88c66423595450bce6d8e1c54265b3a5a0e9\" returns successfully" Jul 6 23:50:31.210591 containerd[1469]: time="2025-07-06T23:50:31.210292963Z" level=info msg="StopPodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\"" Jul 6 23:50:31.240197 sshd[5549]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:31.250313 systemd[1]: sshd@11-10.0.0.53:22-10.0.0.1:56132.service: Deactivated successfully. Jul 6 23:50:31.252091 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:50:31.254009 systemd-logind[1450]: Session 12 logged out. 
Waiting for processes to exit. Jul 6 23:50:31.264407 systemd[1]: Started sshd@12-10.0.0.53:22-10.0.0.1:56138.service - OpenSSH per-connection server daemon (10.0.0.1:56138). Jul 6 23:50:31.265967 systemd-logind[1450]: Removed session 12. Jul 6 23:50:31.294559 sshd[5684]: Accepted publickey for core from 10.0.0.1 port 56138 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:31.294938 sshd[5684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:31.299411 systemd-logind[1450]: New session 13 of user core. Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.261 [WARNING][5673] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d", Pod:"goldmane-58fd7646b9-d9k6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid5db1cfbba5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.261 [INFO][5673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.261 [INFO][5673] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" iface="eth0" netns="" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.261 [INFO][5673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.261 [INFO][5673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.287 [INFO][5686] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.287 [INFO][5686] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.287 [INFO][5686] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.294 [WARNING][5686] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.294 [INFO][5686] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.295 [INFO][5686] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:31.304529 containerd[1469]: 2025-07-06 23:50:31.299 [INFO][5673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.304529 containerd[1469]: time="2025-07-06T23:50:31.304414653Z" level=info msg="TearDown network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" successfully" Jul 6 23:50:31.304529 containerd[1469]: time="2025-07-06T23:50:31.304440141Z" level=info msg="StopPodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" returns successfully" Jul 6 23:50:31.305438 containerd[1469]: time="2025-07-06T23:50:31.305405433Z" level=info msg="RemovePodSandbox for \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\"" Jul 6 23:50:31.305490 containerd[1469]: time="2025-07-06T23:50:31.305443386Z" level=info msg="Forcibly stopping sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\"" Jul 6 23:50:31.306908 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 6 23:50:31.370777 systemd-networkd[1393]: cali6ea5275d57f: Gained IPv6LL Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.347 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"3e5b7cb8-7d1b-4cad-a3a9-00603e8b2e51", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d", Pod:"goldmane-58fd7646b9-d9k6p", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid5db1cfbba5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.347 [INFO][5705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.347 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" iface="eth0" netns="" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.348 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.348 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.384 [INFO][5715] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.385 [INFO][5715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.385 [INFO][5715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.390 [WARNING][5715] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.392 [INFO][5715] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" HandleID="k8s-pod-network.f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Workload="localhost-k8s-goldmane--58fd7646b9--d9k6p-eth0" Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.395 [INFO][5715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:31.401576 containerd[1469]: 2025-07-06 23:50:31.398 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35" Jul 6 23:50:31.401576 containerd[1469]: time="2025-07-06T23:50:31.401141907Z" level=info msg="TearDown network for sandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" successfully" Jul 6 23:50:31.409446 containerd[1469]: time="2025-07-06T23:50:31.407566000Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:50:31.409446 containerd[1469]: time="2025-07-06T23:50:31.407641986Z" level=info msg="RemovePodSandbox \"f06898008fd9c17d8552cfcaec727e0fe884451779e6ef1492af3c7ca77d8c35\" returns successfully" Jul 6 23:50:31.409446 containerd[1469]: time="2025-07-06T23:50:31.408234029Z" level=info msg="StopPodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\"" Jul 6 23:50:31.435291 systemd-networkd[1393]: cali7a907f6155a: Gained IPv6LL Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.457 [WARNING][5738] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6baffdc-d693-4f2e-98c3-45c2d2376ca7", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a", Pod:"calico-apiserver-865bb6f9f-bcvjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali119b2ad589c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.458 [INFO][5738] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.458 [INFO][5738] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" iface="eth0" netns="" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.458 [INFO][5738] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.458 [INFO][5738] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.490 [INFO][5746] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.490 [INFO][5746] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.490 [INFO][5746] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.497 [WARNING][5746] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.497 [INFO][5746] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.500 [INFO][5746] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:31.511711 containerd[1469]: 2025-07-06 23:50:31.506 [INFO][5738] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.511711 containerd[1469]: time="2025-07-06T23:50:31.510870603Z" level=info msg="TearDown network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" successfully" Jul 6 23:50:31.511711 containerd[1469]: time="2025-07-06T23:50:31.510912735Z" level=info msg="StopPodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" returns successfully" Jul 6 23:50:31.512206 containerd[1469]: time="2025-07-06T23:50:31.511903586Z" level=info msg="RemovePodSandbox for \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\"" Jul 6 23:50:31.512206 containerd[1469]: time="2025-07-06T23:50:31.511932873Z" level=info msg="Forcibly stopping sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\"" Jul 6 23:50:31.537014 sshd[5684]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:31.549595 systemd[1]: sshd@12-10.0.0.53:22-10.0.0.1:56138.service: Deactivated successfully. Jul 6 23:50:31.555447 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:50:31.560248 systemd-logind[1450]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:50:31.571498 systemd[1]: Started sshd@13-10.0.0.53:22-10.0.0.1:56144.service - OpenSSH per-connection server daemon (10.0.0.1:56144). Jul 6 23:50:31.577835 systemd-logind[1450]: Removed session 13. Jul 6 23:50:31.604040 sshd[5773]: Accepted publickey for core from 10.0.0.1 port 56144 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:31.605913 sshd[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:31.610245 systemd-logind[1450]: New session 14 of user core. Jul 6 23:50:31.615758 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.572 [WARNING][5763] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6baffdc-d693-4f2e-98c3-45c2d2376ca7", ResourceVersion:"1192", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"967151756a9e785871df5d5f9810cf84bf681fa423e22b484cac74f54532383a", Pod:"calico-apiserver-865bb6f9f-bcvjq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali119b2ad589c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.572 [INFO][5763] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.572 [INFO][5763] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" iface="eth0" netns="" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.572 [INFO][5763] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.572 [INFO][5763] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.605 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.605 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.605 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.611 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.611 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" HandleID="k8s-pod-network.96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Workload="localhost-k8s-calico--apiserver--865bb6f9f--bcvjq-eth0" Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.612 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:31.619278 containerd[1469]: 2025-07-06 23:50:31.615 [INFO][5763] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b" Jul 6 23:50:31.620741 containerd[1469]: time="2025-07-06T23:50:31.619668865Z" level=info msg="TearDown network for sandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" successfully" Jul 6 23:50:31.627097 containerd[1469]: time="2025-07-06T23:50:31.627020748Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:50:31.627097 containerd[1469]: time="2025-07-06T23:50:31.627089921Z" level=info msg="RemovePodSandbox \"96c0a878066acffcbc6d4c4b3e3a03b357664b085f8b0994372b5ac98bf47a9b\" returns successfully" Jul 6 23:50:31.658915 kubelet[2511]: E0706 23:50:31.658876 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:31.666871 kubelet[2511]: E0706 23:50:31.665518 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:31.693412 kubelet[2511]: I0706 23:50:31.693355 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-865bb6f9f-jwkmx" podStartSLOduration=48.693336717 podStartE2EDuration="48.693336717s" podCreationTimestamp="2025-07-06 23:49:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:50:31.692733724 +0000 UTC m=+64.680060003" watchObservedRunningTime="2025-07-06 23:50:31.693336717 +0000 UTC m=+64.680662996" Jul 6 23:50:31.695563 kubelet[2511]: I0706 23:50:31.694453 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-xhnqf" podStartSLOduration=58.694445355 podStartE2EDuration="58.694445355s" podCreationTimestamp="2025-07-06 23:49:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:50:31.674797353 +0000 UTC m=+64.662123652" watchObservedRunningTime="2025-07-06 23:50:31.694445355 +0000 UTC m=+64.681771634" Jul 6 23:50:31.814740 sshd[5773]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:31.819425 systemd[1]: sshd@13-10.0.0.53:22-10.0.0.1:56144.service: Deactivated successfully. 
Jul 6 23:50:31.823113 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:50:31.824556 systemd-logind[1450]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:50:31.826797 systemd-logind[1450]: Removed session 14. Jul 6 23:50:32.667955 kubelet[2511]: E0706 23:50:32.667915 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:32.668781 kubelet[2511]: E0706 23:50:32.668683 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:33.283918 containerd[1469]: time="2025-07-06T23:50:33.283847746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:33.284900 containerd[1469]: time="2025-07-06T23:50:33.284863994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 6 23:50:33.286188 containerd[1469]: time="2025-07-06T23:50:33.286128690Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:33.288588 containerd[1469]: time="2025-07-06T23:50:33.288557599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:33.289434 containerd[1469]: time="2025-07-06T23:50:33.289376556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.949878788s" Jul 6 23:50:33.289434 containerd[1469]: time="2025-07-06T23:50:33.289421873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 6 23:50:33.290968 containerd[1469]: time="2025-07-06T23:50:33.290935970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 6 23:50:33.299201 containerd[1469]: time="2025-07-06T23:50:33.299108722Z" level=info msg="CreateContainer within sandbox \"3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 6 23:50:33.311448 containerd[1469]: time="2025-07-06T23:50:33.311387674Z" level=info msg="CreateContainer within sandbox \"3eb4df3add08b9d54714d4b43045af585c5d939e2ef7bebeb9b3e5195602e80b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"e17be5cd9ee99ce7010773e9d22dcbde21ff0be659a44f8a6ba436821809d84d\"" Jul 6 23:50:33.312034 containerd[1469]: time="2025-07-06T23:50:33.311978311Z" level=info msg="StartContainer for \"e17be5cd9ee99ce7010773e9d22dcbde21ff0be659a44f8a6ba436821809d84d\"" Jul 6 23:50:33.368701 systemd[1]: Started cri-containerd-e17be5cd9ee99ce7010773e9d22dcbde21ff0be659a44f8a6ba436821809d84d.scope - libcontainer container 
e17be5cd9ee99ce7010773e9d22dcbde21ff0be659a44f8a6ba436821809d84d. Jul 6 23:50:33.420353 containerd[1469]: time="2025-07-06T23:50:33.420302725Z" level=info msg="StartContainer for \"e17be5cd9ee99ce7010773e9d22dcbde21ff0be659a44f8a6ba436821809d84d\" returns successfully" Jul 6 23:50:33.677035 kubelet[2511]: E0706 23:50:33.676983 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:50:33.680688 containerd[1469]: time="2025-07-06T23:50:33.680636570Z" level=info msg="StopContainer for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" with timeout 30 (s)" Jul 6 23:50:33.681869 containerd[1469]: time="2025-07-06T23:50:33.681830119Z" level=info msg="Stop container \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" with signal terminated" Jul 6 23:50:33.689503 kubelet[2511]: I0706 23:50:33.689101 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c44cf5b79-2q9kp" podStartSLOduration=27.392151662 podStartE2EDuration="45.6890827s" podCreationTimestamp="2025-07-06 23:49:48 +0000 UTC" firstStartedPulling="2025-07-06 23:50:14.993210341 +0000 UTC m=+47.980536610" lastFinishedPulling="2025-07-06 23:50:33.290141379 +0000 UTC m=+66.277467648" observedRunningTime="2025-07-06 23:50:33.687586057 +0000 UTC m=+66.674912336" watchObservedRunningTime="2025-07-06 23:50:33.6890827 +0000 UTC m=+66.676408979" Jul 6 23:50:33.701405 systemd[1]: cri-containerd-3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a.scope: Deactivated successfully. Jul 6 23:50:34.082357 containerd[1469]: time="2025-07-06T23:50:34.082275173Z" level=info msg="shim disconnected" id=3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a namespace=k8s.io Jul 6 23:50:34.082357 containerd[1469]: time="2025-07-06T23:50:34.082343766Z" level=warning msg="cleaning up after shim disconnected" id=3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a namespace=k8s.io Jul 6 23:50:34.082357 containerd[1469]: time="2025-07-06T23:50:34.082354896Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:50:34.101512 containerd[1469]: time="2025-07-06T23:50:34.101463006Z" level=info msg="StopContainer for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" returns successfully" Jul 6 23:50:34.102029 containerd[1469]: time="2025-07-06T23:50:34.102004649Z" level=info msg="StopPodSandbox for \"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23\"" Jul 6 23:50:34.102071 containerd[1469]: time="2025-07-06T23:50:34.102031230Z" level=info msg="Container to stop \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:50:34.108996 systemd[1]: cri-containerd-4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23.scope: Deactivated successfully. 
Jul 6 23:50:34.133111 containerd[1469]: time="2025-07-06T23:50:34.132910286Z" level=info msg="shim disconnected" id=4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23 namespace=k8s.io Jul 6 23:50:34.133111 containerd[1469]: time="2025-07-06T23:50:34.132985401Z" level=warning msg="cleaning up after shim disconnected" id=4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23 namespace=k8s.io Jul 6 23:50:34.133111 containerd[1469]: time="2025-07-06T23:50:34.132997054Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:50:34.192916 systemd-networkd[1393]: cali7a907f6155a: Link DOWN Jul 6 23:50:34.192929 systemd-networkd[1393]: cali7a907f6155a: Lost carrier Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.191 [INFO][5930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.191 [INFO][5930] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" iface="eth0" netns="/var/run/netns/cni-48b6a1c9-beab-7d29-2847-2a37ebff5000" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.192 [INFO][5930] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" iface="eth0" netns="/var/run/netns/cni-48b6a1c9-beab-7d29-2847-2a37ebff5000" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.203 [INFO][5930] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" after=12.139166ms iface="eth0" netns="/var/run/netns/cni-48b6a1c9-beab-7d29-2847-2a37ebff5000" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.203 [INFO][5930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.203 [INFO][5930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.231 [INFO][5941] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" HandleID="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.231 [INFO][5941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.231 [INFO][5941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.260 [INFO][5941] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" HandleID="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.261 [INFO][5941] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" HandleID="k8s-pod-network.4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.262 [INFO][5941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:34.268518 containerd[1469]: 2025-07-06 23:50:34.265 [INFO][5930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23" Jul 6 23:50:34.268982 containerd[1469]: time="2025-07-06T23:50:34.268784220Z" level=info msg="TearDown network for sandbox \"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23\" successfully" Jul 6 23:50:34.268982 containerd[1469]: time="2025-07-06T23:50:34.268824297Z" level=info msg="StopPodSandbox for \"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23\" returns successfully" Jul 6 23:50:34.269457 containerd[1469]: time="2025-07-06T23:50:34.269399814Z" level=info msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\"" Jul 6 23:50:34.298351 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a-rootfs.mount: Deactivated successfully. Jul 6 23:50:34.298483 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23-rootfs.mount: Deactivated successfully. Jul 6 23:50:34.298579 systemd[1]: run-netns-cni\x2d48b6a1c9\x2dbeab\x2d7d29\x2d2847\x2d2a37ebff5000.mount: Deactivated successfully. Jul 6 23:50:34.298653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23-shm.mount: Deactivated successfully. Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.308 [WARNING][5962] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0", GenerateName:"calico-apiserver-865bb6f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9626945a-0af4-4eaa-ac43-a94044095a5d", ResourceVersion:"1279", Generation:0, CreationTimestamp:time.Date(2025, time.July, 6, 23, 49, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"865bb6f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e5be7dfad8f162ce9150c0d3dc1a3f62224fd7f4652a01778195d5d65dccb23", Pod:"calico-apiserver-865bb6f9f-jwkmx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7a907f6155a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.308 [INFO][5962] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.308 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" iface="eth0" netns="" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.308 [INFO][5962] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.308 [INFO][5962] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.329 [INFO][5970] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" HandleID="k8s-pod-network.2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.329 [INFO][5970] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.330 [INFO][5970] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.336 [WARNING][5970] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" HandleID="k8s-pod-network.2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.336 [INFO][5970] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" HandleID="k8s-pod-network.2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Workload="localhost-k8s-calico--apiserver--865bb6f9f--jwkmx-eth0" Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.338 [INFO][5970] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 6 23:50:34.343914 containerd[1469]: 2025-07-06 23:50:34.340 [INFO][5962] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f" Jul 6 23:50:34.343914 containerd[1469]: time="2025-07-06T23:50:34.343876609Z" level=info msg="TearDown network for sandbox \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\" successfully" Jul 6 23:50:34.343914 containerd[1469]: time="2025-07-06T23:50:34.343907638Z" level=info msg="StopPodSandbox for \"2f577ae31f0013f7d8f911bd01f5719b6e6e4f2f792e5ac1d0ddcd0ba40ecb4f\" returns successfully" Jul 6 23:50:34.457376 kubelet[2511]: I0706 23:50:34.457314 2511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf72l\" (UniqueName: \"kubernetes.io/projected/9626945a-0af4-4eaa-ac43-a94044095a5d-kube-api-access-nf72l\") pod \"9626945a-0af4-4eaa-ac43-a94044095a5d\" (UID: \"9626945a-0af4-4eaa-ac43-a94044095a5d\") " Jul 6 23:50:34.457376 kubelet[2511]: I0706 23:50:34.457365 2511 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9626945a-0af4-4eaa-ac43-a94044095a5d-calico-apiserver-certs\") pod \"9626945a-0af4-4eaa-ac43-a94044095a5d\" (UID: \"9626945a-0af4-4eaa-ac43-a94044095a5d\") " Jul 6 23:50:34.463348 kubelet[2511]: I0706 23:50:34.463306 2511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9626945a-0af4-4eaa-ac43-a94044095a5d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "9626945a-0af4-4eaa-ac43-a94044095a5d" (UID: "9626945a-0af4-4eaa-ac43-a94044095a5d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:50:34.463419 systemd[1]: var-lib-kubelet-pods-9626945a\x2d0af4\x2d4eaa\x2dac43\x2da94044095a5d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnf72l.mount: Deactivated successfully. Jul 6 23:50:34.463650 systemd[1]: var-lib-kubelet-pods-9626945a\x2d0af4\x2d4eaa\x2dac43\x2da94044095a5d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 6 23:50:34.464043 kubelet[2511]: I0706 23:50:34.463624 2511 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9626945a-0af4-4eaa-ac43-a94044095a5d-kube-api-access-nf72l" (OuterVolumeSpecName: "kube-api-access-nf72l") pod "9626945a-0af4-4eaa-ac43-a94044095a5d" (UID: "9626945a-0af4-4eaa-ac43-a94044095a5d"). InnerVolumeSpecName "kube-api-access-nf72l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:50:34.558528 kubelet[2511]: I0706 23:50:34.558475 2511 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf72l\" (UniqueName: \"kubernetes.io/projected/9626945a-0af4-4eaa-ac43-a94044095a5d-kube-api-access-nf72l\") on node \"localhost\" DevicePath \"\"" Jul 6 23:50:34.558528 kubelet[2511]: I0706 23:50:34.558505 2511 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9626945a-0af4-4eaa-ac43-a94044095a5d-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 6 23:50:34.680174 kubelet[2511]: I0706 23:50:34.680032 2511 scope.go:117] "RemoveContainer" containerID="3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a" Jul 6 23:50:34.681725 containerd[1469]: time="2025-07-06T23:50:34.681635724Z" level=info msg="RemoveContainer for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\"" Jul 6 23:50:34.689234 systemd[1]: Removed slice kubepods-besteffort-pod9626945a_0af4_4eaa_ac43_a94044095a5d.slice - libcontainer container kubepods-besteffort-pod9626945a_0af4_4eaa_ac43_a94044095a5d.slice. Jul 6 23:50:34.690019 containerd[1469]: time="2025-07-06T23:50:34.689439615Z" level=info msg="RemoveContainer for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" returns successfully" Jul 6 23:50:34.690518 kubelet[2511]: I0706 23:50:34.690077 2511 scope.go:117] "RemoveContainer" containerID="3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a" Jul 6 23:50:34.698795 containerd[1469]: time="2025-07-06T23:50:34.690411666Z" level=error msg="ContainerStatus for \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\": not found" Jul 6 23:50:34.702765 kubelet[2511]: E0706 23:50:34.702691 2511 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\": not found" containerID="3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a" Jul 6 23:50:34.702913 kubelet[2511]: I0706 23:50:34.702766 2511 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a"} err="failed to get container status \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\": rpc error: code = NotFound desc = an error occurred when try to find container \"3df2992bbb0775de2258219f28d79decc812897a358c0923ec07e1e1fc7d409a\": not found" Jul 6 23:50:35.092040 kubelet[2511]: I0706 23:50:35.091986 2511 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9626945a-0af4-4eaa-ac43-a94044095a5d" path="/var/lib/kubelet/pods/9626945a-0af4-4eaa-ac43-a94044095a5d/volumes" Jul 6 23:50:36.020602 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount831662940.mount: Deactivated successfully. Jul 6 23:50:36.828570 systemd[1]: Started sshd@14-10.0.0.53:22-10.0.0.1:56160.service - OpenSSH per-connection server daemon (10.0.0.1:56160). 
Jul 6 23:50:36.891037 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 56160 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:36.892919 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:36.897487 systemd-logind[1450]: New session 15 of user core. Jul 6 23:50:36.906815 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:50:36.936764 containerd[1469]: time="2025-07-06T23:50:36.936687145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:36.937949 containerd[1469]: time="2025-07-06T23:50:36.937834069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 6 23:50:36.939196 containerd[1469]: time="2025-07-06T23:50:36.939130501Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:36.943848 containerd[1469]: time="2025-07-06T23:50:36.943801018Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:36.945491 containerd[1469]: time="2025-07-06T23:50:36.945443847Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 3.654476496s" Jul 6 23:50:36.945559 containerd[1469]: time="2025-07-06T23:50:36.945490075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 6 23:50:36.947984 containerd[1469]: time="2025-07-06T23:50:36.947808973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 6 23:50:36.950924 containerd[1469]: time="2025-07-06T23:50:36.950878172Z" level=info msg="CreateContainer within sandbox \"64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 6 23:50:36.980475 containerd[1469]: time="2025-07-06T23:50:36.980327489Z" level=info msg="CreateContainer within sandbox \"64bc16a14f4e41bb4fcbec79a14b4c47860fc9908b384e5a200873bbf4d6a45d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"208d723b5c891cdc2a9b4c3e465e17952ca77d0cf39ab9814c57b3480dbb3bf9\"" Jul 6 23:50:36.981828 containerd[1469]: time="2025-07-06T23:50:36.981723903Z" level=info msg="StartContainer for \"208d723b5c891cdc2a9b4c3e465e17952ca77d0cf39ab9814c57b3480dbb3bf9\"" Jul 6 23:50:37.046720 systemd[1]: Started cri-containerd-208d723b5c891cdc2a9b4c3e465e17952ca77d0cf39ab9814c57b3480dbb3bf9.scope - libcontainer container 208d723b5c891cdc2a9b4c3e465e17952ca77d0cf39ab9814c57b3480dbb3bf9. 
Jul 6 23:50:37.097410 containerd[1469]: time="2025-07-06T23:50:37.097253773Z" level=info msg="StartContainer for \"208d723b5c891cdc2a9b4c3e465e17952ca77d0cf39ab9814c57b3480dbb3bf9\" returns successfully" Jul 6 23:50:37.129673 sshd[6019]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:37.134168 systemd[1]: sshd@14-10.0.0.53:22-10.0.0.1:56160.service: Deactivated successfully. Jul 6 23:50:37.137967 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:50:37.139209 systemd-logind[1450]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:50:37.140629 systemd-logind[1450]: Removed session 15. Jul 6 23:50:37.700759 kubelet[2511]: I0706 23:50:37.700261 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-d9k6p" podStartSLOduration=30.386817872 podStartE2EDuration="50.700240334s" podCreationTimestamp="2025-07-06 23:49:47 +0000 UTC" firstStartedPulling="2025-07-06 23:50:16.634229818 +0000 UTC m=+49.621556098" lastFinishedPulling="2025-07-06 23:50:36.947652281 +0000 UTC m=+69.934978560" observedRunningTime="2025-07-06 23:50:37.69998281 +0000 UTC m=+70.687309089" watchObservedRunningTime="2025-07-06 23:50:37.700240334 +0000 UTC m=+70.687566623" Jul 6 23:50:37.713557 systemd[1]: run-containerd-runc-k8s.io-208d723b5c891cdc2a9b4c3e465e17952ca77d0cf39ab9814c57b3480dbb3bf9-runc.XekfZt.mount: Deactivated successfully. Jul 6 23:50:40.118092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1505747906.mount: Deactivated successfully. Jul 6 23:50:41.204984 containerd[1469]: time="2025-07-06T23:50:41.204922633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:41.246041 containerd[1469]: time="2025-07-06T23:50:41.246001493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 6 23:50:41.321728 containerd[1469]: time="2025-07-06T23:50:41.321690757Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:41.428099 containerd[1469]: time="2025-07-06T23:50:41.428036758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:41.428861 containerd[1469]: time="2025-07-06T23:50:41.428811603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 4.480967584s" Jul 6 23:50:41.428861 containerd[1469]: time="2025-07-06T23:50:41.428846871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 6 23:50:41.429845 containerd[1469]: time="2025-07-06T23:50:41.429817301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 6 23:50:41.435850 containerd[1469]: time="2025-07-06T23:50:41.435810475Z" level=info msg="CreateContainer within sandbox 
\"6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 6 23:50:41.685262 containerd[1469]: time="2025-07-06T23:50:41.685203319Z" level=info msg="CreateContainer within sandbox \"6a2d46e2ed0fa01b4a8b34ab02b096557b595a05d00791bf783aba38c557c163\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9abed494e05774e65c5a0e6bfbd036c88db2aa6fd0a521f5805cf35cfd3bb2bb\"" Jul 6 23:50:41.687094 containerd[1469]: time="2025-07-06T23:50:41.685847944Z" level=info msg="StartContainer for \"9abed494e05774e65c5a0e6bfbd036c88db2aa6fd0a521f5805cf35cfd3bb2bb\"" Jul 6 23:50:41.728742 systemd[1]: Started cri-containerd-9abed494e05774e65c5a0e6bfbd036c88db2aa6fd0a521f5805cf35cfd3bb2bb.scope - libcontainer container 9abed494e05774e65c5a0e6bfbd036c88db2aa6fd0a521f5805cf35cfd3bb2bb. Jul 6 23:50:41.775768 containerd[1469]: time="2025-07-06T23:50:41.775724615Z" level=info msg="StartContainer for \"9abed494e05774e65c5a0e6bfbd036c88db2aa6fd0a521f5805cf35cfd3bb2bb\" returns successfully" Jul 6 23:50:42.142019 systemd[1]: Started sshd@15-10.0.0.53:22-10.0.0.1:56196.service - OpenSSH per-connection server daemon (10.0.0.1:56196). Jul 6 23:50:42.190429 sshd[6174]: Accepted publickey for core from 10.0.0.1 port 56196 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ Jul 6 23:50:42.192237 sshd[6174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:50:42.196186 systemd-logind[1450]: New session 16 of user core. Jul 6 23:50:42.207708 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:50:42.375808 sshd[6174]: pam_unix(sshd:session): session closed for user core Jul 6 23:50:42.380012 systemd[1]: sshd@15-10.0.0.53:22-10.0.0.1:56196.service: Deactivated successfully. Jul 6 23:50:42.382002 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:50:42.382730 systemd-logind[1450]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:50:42.383910 systemd-logind[1450]: Removed session 16. 
Jul 6 23:50:43.184437 containerd[1469]: time="2025-07-06T23:50:43.184343770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:43.185195 containerd[1469]: time="2025-07-06T23:50:43.185124173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 6 23:50:43.186423 containerd[1469]: time="2025-07-06T23:50:43.186366181Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:43.188863 containerd[1469]: time="2025-07-06T23:50:43.188813716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:43.189675 containerd[1469]: time="2025-07-06T23:50:43.189610672Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.759759766s" Jul 6 23:50:43.189675 containerd[1469]: time="2025-07-06T23:50:43.189660227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 6 23:50:43.192574 containerd[1469]: time="2025-07-06T23:50:43.192520411Z" level=info msg="CreateContainer within sandbox \"2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 6 23:50:43.211713 containerd[1469]: time="2025-07-06T23:50:43.211642395Z" level=info msg="CreateContainer within sandbox \"2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4d8efea6a6c9445e0dc08653f8c1d9b7988e28177b0c326101b8f576e834180d\"" Jul 6 23:50:43.212572 containerd[1469]: time="2025-07-06T23:50:43.212508372Z" level=info msg="StartContainer for \"4d8efea6a6c9445e0dc08653f8c1d9b7988e28177b0c326101b8f576e834180d\"" Jul 6 23:50:43.254907 systemd[1]: Started cri-containerd-4d8efea6a6c9445e0dc08653f8c1d9b7988e28177b0c326101b8f576e834180d.scope - libcontainer container 4d8efea6a6c9445e0dc08653f8c1d9b7988e28177b0c326101b8f576e834180d. 
Jul 6 23:50:43.711042 containerd[1469]: time="2025-07-06T23:50:43.710892591Z" level=info msg="StartContainer for \"4d8efea6a6c9445e0dc08653f8c1d9b7988e28177b0c326101b8f576e834180d\" returns successfully" Jul 6 23:50:43.714074 containerd[1469]: time="2025-07-06T23:50:43.713834112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 6 23:50:45.675767 containerd[1469]: time="2025-07-06T23:50:45.675714506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:45.676586 containerd[1469]: time="2025-07-06T23:50:45.676528654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 6 23:50:45.678008 containerd[1469]: time="2025-07-06T23:50:45.677949861Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:45.680186 containerd[1469]: time="2025-07-06T23:50:45.680155148Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:50:45.681034 containerd[1469]: time="2025-07-06T23:50:45.681001086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.967115886s" Jul 6 23:50:45.681102 containerd[1469]: time="2025-07-06T23:50:45.681059848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 6 23:50:45.682915 containerd[1469]: time="2025-07-06T23:50:45.682886181Z" level=info msg="CreateContainer within sandbox \"2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 6 23:50:45.697522 containerd[1469]: time="2025-07-06T23:50:45.697466041Z" level=info msg="CreateContainer within sandbox \"2f17aec1d737f631836c8e533c5ec914a662bc167894a7165fda72527c2259b3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8ea163a7d45e5504d952f274c1bc47a72efeb769e929e7cc84e9122a4b6d6cf8\"" Jul 6 23:50:45.698715 containerd[1469]: time="2025-07-06T23:50:45.698597374Z" level=info msg="StartContainer for \"8ea163a7d45e5504d952f274c1bc47a72efeb769e929e7cc84e9122a4b6d6cf8\"" Jul 6 23:50:45.736680 systemd[1]: Started cri-containerd-8ea163a7d45e5504d952f274c1bc47a72efeb769e929e7cc84e9122a4b6d6cf8.scope - libcontainer container 8ea163a7d45e5504d952f274c1bc47a72efeb769e929e7cc84e9122a4b6d6cf8. 
Jul 6 23:50:45.772104 containerd[1469]: time="2025-07-06T23:50:45.772042828Z" level=info msg="StartContainer for \"8ea163a7d45e5504d952f274c1bc47a72efeb769e929e7cc84e9122a4b6d6cf8\" returns successfully"
Jul 6 23:50:46.089707 kubelet[2511]: E0706 23:50:46.089654 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:50:46.317379 kubelet[2511]: I0706 23:50:46.317315 2511 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jul 6 23:50:46.317379 kubelet[2511]: I0706 23:50:46.317360 2511 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jul 6 23:50:46.774399 kubelet[2511]: I0706 23:50:46.773872 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6f9cd94d88-l6xgj" podStartSLOduration=7.145839484 podStartE2EDuration="33.773854736s" podCreationTimestamp="2025-07-06 23:50:13 +0000 UTC" firstStartedPulling="2025-07-06 23:50:14.801644907 +0000 UTC m=+47.788971186" lastFinishedPulling="2025-07-06 23:50:41.429660149 +0000 UTC m=+74.416986438" observedRunningTime="2025-07-06 23:50:42.714808518 +0000 UTC m=+75.702134797" watchObservedRunningTime="2025-07-06 23:50:46.773854736 +0000 UTC m=+79.761181016"
Jul 6 23:50:46.774399 kubelet[2511]: I0706 23:50:46.774048 2511 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dkdw8" podStartSLOduration=42.131373265 podStartE2EDuration="58.77404484s" podCreationTimestamp="2025-07-06 23:49:48 +0000 UTC" firstStartedPulling="2025-07-06 23:50:29.039121645 +0000 UTC m=+62.026447924" lastFinishedPulling="2025-07-06 23:50:45.68179322 +0000 UTC m=+78.669119499" observedRunningTime="2025-07-06 23:50:46.773300317 +0000 UTC m=+79.760626596" watchObservedRunningTime="2025-07-06 23:50:46.77404484 +0000 UTC m=+79.761371109"
Jul 6 23:50:47.389380 systemd[1]: Started sshd@16-10.0.0.53:22-10.0.0.1:56208.service - OpenSSH per-connection server daemon (10.0.0.1:56208).
Jul 6 23:50:47.462518 sshd[6275]: Accepted publickey for core from 10.0.0.1 port 56208 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:50:47.464830 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:47.469785 systemd-logind[1450]: New session 17 of user core.
Jul 6 23:50:47.479812 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:50:48.110718 sshd[6275]: pam_unix(sshd:session): session closed for user core
Jul 6 23:50:48.115800 systemd[1]: sshd@16-10.0.0.53:22-10.0.0.1:56208.service: Deactivated successfully.
Jul 6 23:50:48.118222 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:50:48.119120 systemd-logind[1450]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:50:48.120219 systemd-logind[1450]: Removed session 17.
Jul 6 23:50:51.090493 kubelet[2511]: E0706 23:50:51.089676 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:50:53.120488 systemd[1]: Started sshd@17-10.0.0.53:22-10.0.0.1:59528.service - OpenSSH per-connection server daemon (10.0.0.1:59528).
Jul 6 23:50:53.178691 sshd[6314]: Accepted publickey for core from 10.0.0.1 port 59528 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:50:53.180552 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:53.184625 systemd-logind[1450]: New session 18 of user core.
Jul 6 23:50:53.192648 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:50:53.364335 sshd[6314]: pam_unix(sshd:session): session closed for user core
Jul 6 23:50:53.369256 systemd[1]: sshd@17-10.0.0.53:22-10.0.0.1:59528.service: Deactivated successfully.
Jul 6 23:50:53.372387 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:50:53.373690 systemd-logind[1450]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:50:53.374732 systemd-logind[1450]: Removed session 18.
Jul 6 23:50:58.387393 systemd[1]: Started sshd@18-10.0.0.53:22-10.0.0.1:44314.service - OpenSSH per-connection server daemon (10.0.0.1:44314).
Jul 6 23:50:58.433968 sshd[6337]: Accepted publickey for core from 10.0.0.1 port 44314 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:50:58.435751 sshd[6337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:58.439706 systemd-logind[1450]: New session 19 of user core.
Jul 6 23:50:58.449686 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 6 23:50:58.571773 sshd[6337]: pam_unix(sshd:session): session closed for user core
Jul 6 23:50:58.583572 systemd[1]: sshd@18-10.0.0.53:22-10.0.0.1:44314.service: Deactivated successfully.
Jul 6 23:50:58.585566 systemd[1]: session-19.scope: Deactivated successfully.
Jul 6 23:50:58.587432 systemd-logind[1450]: Session 19 logged out. Waiting for processes to exit.
Jul 6 23:50:58.594888 systemd[1]: Started sshd@19-10.0.0.53:22-10.0.0.1:44318.service - OpenSSH per-connection server daemon (10.0.0.1:44318).
Jul 6 23:50:58.595789 systemd-logind[1450]: Removed session 19.
Jul 6 23:50:58.624606 sshd[6351]: Accepted publickey for core from 10.0.0.1 port 44318 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:50:58.626258 sshd[6351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:50:58.630282 systemd-logind[1450]: New session 20 of user core.
Jul 6 23:50:58.635747 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 6 23:51:00.034811 sshd[6351]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:00.044645 systemd[1]: sshd@19-10.0.0.53:22-10.0.0.1:44318.service: Deactivated successfully.
Jul 6 23:51:00.046783 systemd[1]: session-20.scope: Deactivated successfully.
Jul 6 23:51:00.048220 systemd-logind[1450]: Session 20 logged out. Waiting for processes to exit.
Jul 6 23:51:00.052983 systemd[1]: Started sshd@20-10.0.0.53:22-10.0.0.1:44330.service - OpenSSH per-connection server daemon (10.0.0.1:44330).
Jul 6 23:51:00.053908 systemd-logind[1450]: Removed session 20.
Jul 6 23:51:00.097924 sshd[6383]: Accepted publickey for core from 10.0.0.1 port 44330 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:51:00.099550 sshd[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:00.103648 systemd-logind[1450]: New session 21 of user core.
Jul 6 23:51:00.111700 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 6 23:51:01.091717 kubelet[2511]: E0706 23:51:01.091665 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:51:02.089770 kubelet[2511]: E0706 23:51:02.089710 2511 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 6 23:51:02.510178 sshd[6383]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:02.520498 systemd[1]: sshd@20-10.0.0.53:22-10.0.0.1:44330.service: Deactivated successfully.
Jul 6 23:51:02.526424 systemd[1]: session-21.scope: Deactivated successfully.
Jul 6 23:51:02.527219 systemd-logind[1450]: Session 21 logged out. Waiting for processes to exit.
Jul 6 23:51:02.537429 systemd[1]: Started sshd@21-10.0.0.53:22-10.0.0.1:44336.service - OpenSSH per-connection server daemon (10.0.0.1:44336).
Jul 6 23:51:02.539527 systemd-logind[1450]: Removed session 21.
Jul 6 23:51:02.571994 sshd[6434]: Accepted publickey for core from 10.0.0.1 port 44336 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:51:02.573913 sshd[6434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:02.578561 systemd-logind[1450]: New session 22 of user core.
Jul 6 23:51:02.586779 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 6 23:51:02.968168 sshd[6434]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:02.976567 systemd[1]: sshd@21-10.0.0.53:22-10.0.0.1:44336.service: Deactivated successfully.
Jul 6 23:51:02.978972 systemd[1]: session-22.scope: Deactivated successfully.
Jul 6 23:51:02.981605 systemd-logind[1450]: Session 22 logged out. Waiting for processes to exit.
Jul 6 23:51:02.989019 systemd[1]: Started sshd@22-10.0.0.53:22-10.0.0.1:44346.service - OpenSSH per-connection server daemon (10.0.0.1:44346).
Jul 6 23:51:02.991063 systemd-logind[1450]: Removed session 22.
Jul 6 23:51:03.017698 sshd[6446]: Accepted publickey for core from 10.0.0.1 port 44346 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:51:03.019369 sshd[6446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:03.023871 systemd-logind[1450]: New session 23 of user core.
Jul 6 23:51:03.034689 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 6 23:51:03.151030 sshd[6446]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:03.156304 systemd[1]: sshd@22-10.0.0.53:22-10.0.0.1:44346.service: Deactivated successfully.
Jul 6 23:51:03.158732 systemd[1]: session-23.scope: Deactivated successfully.
Jul 6 23:51:03.160122 systemd-logind[1450]: Session 23 logged out. Waiting for processes to exit.
Jul 6 23:51:03.161272 systemd-logind[1450]: Removed session 23.
Jul 6 23:51:08.164923 systemd[1]: Started sshd@23-10.0.0.53:22-10.0.0.1:54780.service - OpenSSH per-connection server daemon (10.0.0.1:54780).
Jul 6 23:51:08.218566 sshd[6466]: Accepted publickey for core from 10.0.0.1 port 54780 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:51:08.219614 sshd[6466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:08.228546 systemd-logind[1450]: New session 24 of user core.
Jul 6 23:51:08.235713 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 6 23:51:08.454345 sshd[6466]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:08.459149 systemd[1]: sshd@23-10.0.0.53:22-10.0.0.1:54780.service: Deactivated successfully.
Jul 6 23:51:08.461464 systemd[1]: session-24.scope: Deactivated successfully.
Jul 6 23:51:08.462204 systemd-logind[1450]: Session 24 logged out. Waiting for processes to exit.
Jul 6 23:51:08.463754 systemd-logind[1450]: Removed session 24.
Jul 6 23:51:13.465843 systemd[1]: Started sshd@24-10.0.0.53:22-10.0.0.1:54782.service - OpenSSH per-connection server daemon (10.0.0.1:54782).
Jul 6 23:51:13.515290 sshd[6480]: Accepted publickey for core from 10.0.0.1 port 54782 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:51:13.517089 sshd[6480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:13.521050 systemd-logind[1450]: New session 25 of user core.
Jul 6 23:51:13.526665 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 6 23:51:13.740356 sshd[6480]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:13.744502 systemd[1]: sshd@24-10.0.0.53:22-10.0.0.1:54782.service: Deactivated successfully.
Jul 6 23:51:13.746743 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:51:13.747605 systemd-logind[1450]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:51:13.748396 systemd-logind[1450]: Removed session 25.
Jul 6 23:51:18.765042 systemd[1]: Started sshd@25-10.0.0.53:22-10.0.0.1:52586.service - OpenSSH per-connection server daemon (10.0.0.1:52586).
Jul 6 23:51:18.810273 sshd[6496]: Accepted publickey for core from 10.0.0.1 port 52586 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:51:18.812428 sshd[6496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:51:18.817585 systemd-logind[1450]: New session 26 of user core.
Jul 6 23:51:18.822823 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 6 23:51:18.974769 sshd[6496]: pam_unix(sshd:session): session closed for user core
Jul 6 23:51:18.978756 systemd[1]: sshd@25-10.0.0.53:22-10.0.0.1:52586.service: Deactivated successfully.
Jul 6 23:51:18.981072 systemd[1]: session-26.scope: Deactivated successfully.
Jul 6 23:51:18.981853 systemd-logind[1450]: Session 26 logged out. Waiting for processes to exit.
Jul 6 23:51:18.982970 systemd-logind[1450]: Removed session 26.
Jul 6 23:51:19.646205 systemd[1]: run-containerd-runc-k8s.io-8fdca8578fe419e5d4cea044d878bb810e6f303b27b24e2ec5e1ffd107b0d95c-runc.BMWoWP.mount: Deactivated successfully.